Ruby-1.9.3-p429 hangs when calling OpenSSL

So I’ve been debugging a problem for the better part of today. We first noticed an issue when our test suite was taking forever to finish; it turned out that a certain server we integrate with was timing out on every single test. We initially chalked it up to the server being slow, but when 10 tests each take 60 seconds to time out, it adds a big chunk of time to every test suite run.

To provide some more background, our Ruby on Rails app uses ActiveMerchant to connect to NMI to process transactions. We kept getting the following error: “The connection to the remote server timed out”

The weird thing was that it was only happening on our Macs running OS X 10.8.3 (Mountain Lion), but not on our production server, which runs Ubuntu.

So I decided to spend some time debugging the issue. I found that if I switched back to ruby-1.9.3-p392, everything worked fine. I thought maybe my Ruby was compiled incorrectly, so I recompiled ruby-1.9.3-p429, but that didn’t fix the problem.

Tracing the code, the calls go through:

  • ssl_post
  • ssl_request
  • raw_ssl_request

which eventually creates a Net::HTTP connection and makes the SSL request.

So I wrote a little test to see what happens:
require 'net/http'

# Minimal reproduction: make an SSL POST straight to the gateway.
h = Net::HTTP.new('secure.networkmerchants.com', 443).tap do |http|
  http.use_ssl = true
end

h.post('/api/transact.php', '')

In p392, I would get:
#<Net::HTTPOK 200 OK readbody=true>

But in p429, I would get:
Errno::ECONNRESET: Connection reset by peer - SSL_connect

Searching for that error string eventually led me to OpenSSL. I found out that p429 had switched to using Homebrew’s version of OpenSSL (1.0.1e) instead of the system’s version (0.9.8r).

Using OpenSSL 0.9.8r, everything worked fine; with OpenSSL 1.0.1e, the connection kept failing, and s_client showed the following:
$ openssl s_client -connect secure.networkmerchants.com:443

CONNECTED(00000003)
write:errno=54
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 322 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
---

I contacted the openssl-users mailing list and got back the following response:

This is most likely another case of the frequently reported (and discussed) issue that 1.0.1 implements TLS1.2, which has more ciphersuites enabled by default and additional extensions, which together make the ClientHello bigger, and some server implementations apparently can’t cope. It appears in at least many cases the cutoff is 256 bytes, suggesting these servers don’t handle 2-byte length right.

It’s unlikely that this would be explicitly configured on a server, rather it would be an implementation flaw that previously did not cause a problem. It might occur in an older version of server software fixed in a newer version.

For many details see
http://rt.openssl.org/Ticket/Display.html?id=2771&user=guest&pass=guest

Short answer is that restricting to TLS1(.0), and/or a smaller list of ciphersuites (but still enough to intersect with the server), likely works. Both do for me using 1.0.1e to your example host. You can use -msg in s_client to see exactly how much (and what) is sent for different options.
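Following that advice, you can watch the handshake against the server directly with s_client; for example, restricting it to TLS 1.0 while dumping the handshake messages (-tls1 and -msg are standard s_client flags):

$ openssl s_client -connect secure.networkmerchants.com:443 -tls1 -msg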

So I tried setting the SSL version to :TLSv1, but that didn’t seem to work. Setting it to :SSLv3 did, though:
http.ssl_version = :SSLv3
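
To sanity-check the workaround in isolation, here’s the earlier standalone test again with the version pinned (same host and endpoint as before):

require 'net/http'

# Same test as above, but pin the handshake to SSLv3 so the
# smaller ClientHello doesn't trip up the server.
h = Net::HTTP.new('secure.networkmerchants.com', 443).tap do |http|
  http.use_ssl = true
  http.ssl_version = :SSLv3
end

h.post('/api/transact.php', '')
# => expect #<Net::HTTPOK 200 OK readbody=true>, as in p392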

Following the example from Forcing SSL TLSv1, I was able to override the ssl_version of the http connection that ssl_post creates:

class SSLv3Connection < ActiveMerchant::Connection
  def configure_ssl(http)
    super(http)
    http.ssl_version = :SSLv3
  end
end

def new_connection(endpoint)
  SSLv3Connection.new(endpoint)
end

Reminder Issues in iCal with Google Calendar

I’ve been using Fantastical on my Mac and iPhone as my default calendar application these days and it’s been wonderful, or should I say fantastical. Anyway, I recently hit a weird issue that I initially thought was a bug in Fantastical, but upon further investigation it appears to be a bigger issue between Google Calendar and the Mac’s Calendar app (previously known as, and referred to in the rest of this article as, iCal).

I noticed that whenever I tried changing reminders in Fantastical, despite the change appearing to be successful, the reminders were reverted when I reopened the event. Not only that, when I create a new event in Fantastical, it always uses the default reminders, despite my default being no reminders. Even when I add a reminder before creating the event, my reminder gets removed and the defaults are used.

I tried to create a simple event in iCal and was able to set the reminder that I wanted. So I contacted @flexbits (maker of Fantastical) on Twitter and they basically told me the reminder settings get overridden by Google.

So I did some more testing and here’s what I actually found:

  1. In iCal, you can set the reminder you want when creating the event, but once the event is created, you can’t change the reminders anymore. It’ll appear that any reminder you modify, add, or remove has succeeded, but if you open up the event details again, you’ll find the reminder settings have been reverted to what they were previously.
  2. Despite the reminder settings being reverted in iCal, Google Calendar actually shows the reminder changes that you’ve made. So there appears to be some sync issue between iCal and Google Calendar where iCal can push up reminder changes, but can’t pull those changes down. Not sure if the bug lies in Google’s implementation or iCal’s implementation.

Since Fantastical always uses the default reminders, it leads me to think that creating an event is a multi-step process in Fantastical, where adding the reminders is a later step, which then gets ignored.

Don’t have a solution here. Just wanted to share my findings. Seems like if you want to use Google Calendar and not use the default reminders, your best bet is to create the events in iCal and set the reminders you want.

get_template_part_content for Single Posts

I’ve been looking for a non-intrusive way to insert text before any of my WordPress Single Posts and Pages. In the past, I modified the themes directly, but since I didn’t own the themes I was using, anytime I updated the theme, I would lose all my changes.

I had found that I could insert text at the top and bottom of my pages using add_action. For example:

add_action( 'get_footer', 'echo_hello_world' );

// Runs whenever the footer template is loaded.
function echo_hello_world()
{
   echo "hello world";
}

would add the phrase “hello world” to the bottom of every single page on my blog. You can find a list of tags on WordPress’ Action Reference page.

One of the things I wanted to do was insert some text beneath my header but before my post. I had found that get_template_part_content would do the trick on Pages, but not on Single Posts. After failing to find a solution online, I looked into how the code differed between Pages and Single Posts.

Single Posts were calling:
get_template_part( 'content-single', get_post_format() );

while Pages were calling:
get_template_part( 'content', 'page' );

So I wondered if get_template_part_content_single would work, but nothing happened. I then tried get_template_part_content-single, and lo and behold, it WORKED! Given that Google has 0 search results for that exact phrase, I’m guessing no one’s discovered this action tag, or at least no one’s posted about it publicly.
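
In hindsight, this makes sense: get_template_part() fires a dynamic action named after the slug it’s given, so the 'content-single' slug produces the 'get_template_part_content-single' hook. From WordPress core (paraphrased):

// Inside get_template_part( $slug, $name ) in wp-includes/general-template.php:
do_action( "get_template_part_{$slug}", $slug, $name );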

So my code ended up looking like:

add_action( 'get_template_part_content', 'echo_hello_world' );
add_action( 'get_template_part_content-single', 'echo_hello_world' );

function echo_hello_world()
{
   if( is_single() || is_page() )
   {
       echo "hello world";
   }
}

As a bonus, here’s a tag you can use to insert text between your post and the comments section: comments_template
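
Note that comments_template is technically a filter on the path of the theme’s comments template file, which gets resolved right at that spot in the page, so the callback should pass the path through. A minimal sketch (the function name here is mine):

add_filter( 'comments_template', 'echo_before_comments' );

function echo_before_comments( $template )
{
   // Output lands between the post and the comments section,
   // since this filter runs when comments_template() is called.
   echo "hello world";
   return $template;   // pass the template path through unchanged
}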

Greyhole (Alternative for Windows Home Server Drive Extender)

Greyhole Storage Pool

Previous Storage Solutions

So I had used Windows Home Server when it first came out and loved its Drive Extender feature. It let you add any number of drives to a pool and then create shared folders. You could specify how many copies of a file you wanted for each shared folder, thereby providing redundancy. I was extremely sad when they announced that Drive Extender would be removed from Windows Home Server 2011, meaning I would have to look for a new solution.

In the meantime, I had switched to using Macs and had begun using SoftRAID as my storage solution. Ultimately, I was highly disappointed with the software given its price. I had set up 2 volumes, a 6TB RAID-0 array and a 3TB RAID-1 array. Both arrays consisted of 2 external USB 3TB drives. Things worked great as long as you didn’t reboot. But when you did reboot, there was a 25% chance that a USB drive wouldn’t be mounted in time, at which point SoftRAID would time out and mark the array as degraded. Things would be fine if you could just re-add the drive to the array, but unfortunately it didn’t work like that. It required you to reinitialize the drive and add it as if it were a brand new drive. The rebuild was the worst part, as it took 2 days for the drive to be fully rebuilt.

I’ve sent numerous emails to their support requesting the ability to set a drive to read-only until the mirrored drive is remounted or replaced, but they’ve fallen on deaf ears. And I’m not the only one who’s hit this problem. I’m happy to say I’ve officially uninstalled and completely removed SoftRAID from my machine.

A while back, I had read about Greyhole being an alternative to Windows Home Server’s Drive Extender, and I finally decided to check it out.

Building the Rig

I decided that I needed to build a new rig for my storage server, mainly because I wanted to take advantage of the fact that my external USB drives were USB 3.0.

I had picked up a barebones combo from Newegg a couple weeks ago for less than $200, which included a BIOSTAR Hi-Fi A85W motherboard, an AMD A10-5800K Trinity 3.8GHz CPU, and 8GB of G.SKILL Ripjaws X Series 240-Pin DDR3 SDRAM. The main reason this combo worked out was that it included everything (sans case and power supply). It also had 8 onboard SATA ports as well as onboard graphics. It’s a pity it only came with 1 stick of RAM, but do I really need more than 8GB of RAM for a file server? There were some free headphones too, but those will probably end up in my pile of useless junk in the garage.

I had also picked up a USB 3.0 motherboard adapter, giving me 2 extra USB 3.0 ports.

USB 3.0 Hubs

I don’t know why, but USB hubs tend to fail when dealing with large amounts of data transfer. I’ve tried at least a dozen different USB hubs, and the only one that has worked consistently is this Belkin 7-port USB hub, but unfortunately it only supports up to USB 2.0.

Even with the USB 3.0 motherboard adapter, I had only 4 USB 3.0 ports for 6 external USB drives. So I decided to try Monoprice’s 4-port USB 3.0 hub, and I’m sad to report that it also fails under high-bandwidth scenarios. By fail, I mean drives just disappear from the system, and that’s not good for a storage server.

So I ended up connecting 4 of the drives directly to the motherboard’s USB 3.0 ports and 2 of them to USB 2.0 ports. If someone has a good USB 3.0 hub recommendation, I’m all ears.

Installing Amahi + Greyhole

Why Amahi? Greyhole is the storage-pool software, but Amahi provides a decent user interface on top of it. Since Amahi recommended Ubuntu 12.04 LTS, that’s what I installed. Ubuntu’s install was rather straightforward, though I had to burn a DVD for the first time in a long while; apparently getting Ubuntu to install via a USB thumbdrive isn’t very straightforward.

Following their instructions, I had a bit of a problem getting Amahi installed. It would fail randomly throughout the installation process, but the third time was the charm. I forget exactly what I did to fix it, but I recall installing some dependencies it was expecting. That probably also explains why Greyhole wasn’t set up properly and I had to rerun the initialization scripts to set up the MySQL databases.

Adding Drives

Following their instructions, I got all my drives set up and auto-mounted. I recommend appending nobootwait to your mount options so that if Ubuntu fails to mount a drive, boot won’t just hang. Without it, the boot blocks before the SSH daemon is started, leaving no way to fix things remotely, even if you just want the machine to continue without mounting the drive.
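
For reference, here’s what such an /etc/fstab entry might look like (the UUID and mount point below are placeholders; substitute your own):

# Example /etc/fstab entry; UUID and mount point are placeholders.
# nobootwait lets the boot continue even if this drive fails to mount.
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/drive1  ext4  defaults,nobootwait  0  2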

I would also recommend physically labeling your drives to match their mount locations (e.g. drive1, drive2, etc.), so when one dies you know which drive it is and which one to replace. Fortunately, with Greyhole, even if you accidentally remove the wrong drive, sticking it back in is not that big of a deal.

Once the drives are added, they should show up on your Dashboard > Storage:

Amahi Storage

Setting up Greyhole

By default, Greyhole is disabled, and you’ll have to enable it in the advanced settings. Once enabled, more options show up under Shares. First you’ll have to select which drives you want to be part of the storage pool. As you can see from the screenshot at the top, I’ve enabled everything besides my main OS drive; adding the OS drive to the pool is not recommended, as you don’t want to run out of space on that drive.

New options should also show up for the shared folders:

Greyhole Shared Folder Options

As you can see, I enabled this specific share to use the pool and to keep 2 extra copies (so a total of 3 copies of each file, on 3 separate drives). The number of extra copies you can set ranges from 0 to max (the number of drives in your pool).

Connecting to Shared Folders

Greyhole exposes these shared folders as SMB/CIFS network drives, very much like Windows shared folders. By default it adds an hda machine name to your network, so to connect to it on a Mac, you’ll connect to smb://hda/name_of_share.

I’ve found that on the Mac, if you want it to automatically connect to the network share, the easiest way is to drag and drop the share onto your list of login items in System Preferences > Users & Groups. Unfortunately, this has the side effect of opening all the share windows when you log in; in my case, about 10 different share folders get opened every login.

How Greyhole Works

One of the things that confused me early on was that after making changes to a particular share, Greyhole didn’t seem to do anything. The logs showed it was just sleeping, even though I had just told it to add an extra copy of everything in a non-empty share.

It turns out Greyhole is a daemon service: it acts when new files show up, but any settings changes you make to an existing share don’t actually take effect until its nightly file system check runs. I’ve learnt you can manually trigger the file system check by running sudo greyhole -fsck.

When you copy files into a share, they all get dumped into a default (configurable) landing location. The Greyhole daemon then kicks in and, depending on your settings, begins moving or duplicating the files onto the drives designated as part of the pool.

I do have to warn you never to run sudo greyhole -D, as it starts a second instance of the Greyhole daemon, and the two confuse the shit out of each other. When one instance was copying a file, the other was deleting it; when the first instance saw the file was gone, it assumed the user had deleted it and deleted all of its copies. Good thing I always double-check with rsync to make sure all copied files are good.

What you should do instead is: sudo service greyhole start | restart | stop

Greyhole also has an option to balance available space across the drives, which I assume means that if you add a new drive to the pool, it will shift files around so you won’t have one drive that is fully packed while another is completely empty. The command for this is sudo greyhole -l, but I haven’t really seen it do much.

Removing / Replacing Drives

One of the best benefits of Greyhole / Drive Extender is the ease of adding and removing drives. I’ve already discussed adding drives, and removing them isn’t much more work. If a drive still works but will be removed, you mark it as “going”, and Greyhole will begin moving its files onto the other drives in the pool. If a drive is dead, you mark it as “gone”. Files without extra copies are unfortunately unrecoverable, but for files that do have duplicate copies, Greyhole will begin re-duplicating them from the other drives to ensure the number of copies matches your settings.

Once the process completes, it’s safe to remove the drive.

Trash

Ever get that dreaded warning when you’re deleting files from a network share, telling you the deletion will be permanent? Yeah, I always double-check to make sure I didn’t accidentally select the wrong file (god forbid, the wrong folder). With Greyhole, when you delete a file, it gets moved to a Trash shared folder, which you can mount and access like your recycle bin. To empty the trash, just run sudo greyhole -a.

Monitoring Greyhole

There are a couple of ways to monitor what Greyhole is doing.

  • watch greyhole -q displays your list of shares and the # of operations queued.
  • greyhole -s shows you the stats of your drives (e.g., total/free space).
  • greyhole -S shows you a summary of what Greyhole is doing.
  • greyhole -L tails the log file, telling you exactly what it’s doing.

Remote Access

A couple things you want to install right off the bat are openssh server and vncserver.

I found it rather weird that a Linux distro didn’t have an SSH daemon installed by default. It does have a VNC server, but it requires you to log in at the console before you can “share desktop”. The reason you need VNC is that the Amahi dashboard is only accessible on that machine; most Greyhole functions can be controlled via SSH.

I had a lot of trouble getting the VNC server working (without having a user logged in). Unfortunately, detailing how I got it to work would be another giant post in itself, but if you have questions, leave a comment below and I’ll try my best to answer.

BluetoothAdapter.getDefaultAdapter() throws RuntimeException: Can’t create handler inside thread that has not called Looper.prepare()

So I’m using some hardware SDK that attempts to scan for Bluetooth devices on Android, and I hit the exception below when calling BluetoothAdapter.getDefaultAdapter():

Caused by: java.lang.RuntimeException: Can't create handler inside thread that has not called Looper.prepare()
    at android.os.Handler.<init>(Handler.java:121)
    at android.bluetooth.BluetoothAdapter$1.<init>(BluetoothAdapter.java:984)
    at android.bluetooth.BluetoothAdapter.<init>(BluetoothAdapter.java:984)
    at android.bluetooth.BluetoothAdapter.getDefaultAdapter(BluetoothAdapter.java:329)

After researching this problem, it turns out there’s a bug in Android (which apparently still exists in Android 4.0, Ice Cream Sandwich, according to the discussions I’ve been reading). The bug is that during the initialization of the default adapter, there exists UI code (a Handler is created), which means it needs to be run on the UI thread.

Update 2013/04/22: Martin has brought to my attention that as long as you call BluetoothAdapter.getDefaultAdapter() inside any thread that has already called Looper.prepare(), it should work. It doesn’t have to be the UI thread.

I was attempting to use an AsyncTask to search for Bluetooth devices in the background. Workarounds suggest running the code on the main thread or using a Handler so the code is queued on the main thread. Unfortunately, both solutions block the UI thread, and what you see is a progress dialog with a spinner that stops spinning.

It turns out, however, that if you call BluetoothAdapter.getDefaultAdapter() on the main UI thread once, subsequent calls no longer crash, even from background threads.
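
Here’s a minimal sketch of that workaround (the activity and the scanning details are illustrative, not from the actual SDK):

import android.app.Activity;
import android.bluetooth.BluetoothAdapter;
import android.os.AsyncTask;
import android.os.Bundle;

public class ScanActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Warm up the adapter on the UI thread (which has a Looper);
        // this is the call that crashes on a plain background thread.
        BluetoothAdapter.getDefaultAdapter();

        new AsyncTask<Void, Void, Void>() {
            @Override
            protected Void doInBackground(Void... params) {
                // Subsequent calls from a background thread no longer throw.
                BluetoothAdapter adapter = BluetoothAdapter.getDefaultAdapter();
                // ... kick off the SDK's Bluetooth device scan here ...
                return null;
            }
        }.execute();
    }
}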

Hope that helps!

Twitter Weekly Updates for 2012-10-14

  • @mendkr Glad you made it back! Sounded like quite a trip. in reply to mendkr #
  • Nowadays, even if one engine fails and blows up, the rocket can still make it into space: http://t.co/4UAkaQGq #
  • A good way to find out which apps are using location services on iOS 6: Settings > General > Restrictions > Location Services. #
  • Turns out Just Landed was still tracking a 3-week old flight and Passbook was constantly checking if I'm near my favorite Starbucks. #
  • @rothgar Picture makes you look like you were face-planted into the sand ;p in reply to rothgar #
  • Not as cute as Om nom, but Petit from Contre Jour is cute and fun. Reminds me of Limbo. Now available on HTML5: http://t.co/trlW7ORn #
  • I wonder if flies and other insects feel pain when they ram into a window. If they do, they certainly don’t learn not to do it again. #
  • @fearthecowboy I know! They say the comment system should be back up soon, but not soon enough! in reply to fearthecowboy #