Obihai working with Google Voice once again!

Obihai just sent out an email to all their users announcing that once again Google Voice is now officially supported on OBi VoIP devices.

Google had announced a while back that they were disabling XMPP support, causing Obihai and other VoIP device manufacturers who were piggybacking on Google Voice to scramble for a new free/cheap solution. XMPP access was supposed to be disabled back in May, but I don’t think Google has actually pulled the plug yet. Though that hasn’t stopped me from finding a new solution while we wait for the inevitable.

Yesterday, Google announced they’ve integrated Google Voice into Google Hangouts. Today, Obihai announced that they’re officially supporting Google Voice again. Not sure if that’s purely a coincidence or if the integration has opened up some access point that Obihai can now connect to.

Setting up your Obihai device with Google Voice is simpler than before:

  1. Log into https://www.obitalk.com and go to your Dashboard
  2. Select your OBi device from the list
  3. Click on the new Google Voice Set-Up button above your service provider list
    Google Voice Set-Up
  4. If this is your first time setting up with the new Google Voice, it should prompt you to update your firmware, which should take 1-2 minutes
  5. Once the firmware update completes, enter your area code and link it up with your Google Voice account

Tada! That’s it! You’re no longer required to provide OBiTalk with your Google credentials. Instead, they’ve adopted OAuth (similar to how you can use Facebook or Twitter to log into random websites).

What to do with my OBiTalk device now that Google Voice is going away?

Chuck it in the trash. I kid. I kid.

For the past 5 years I’ve been enjoying free VoIP calling via Google Voice, starting back when it was known as GrandCentral. Back then, I was using a Linksys PAP2 instead of my current OBi110. But it looks like that party is coming to an end. Google Voice has announced they’ll be ending XMPP support on May 15th, which basically prevents any of the current VoIP devices/services from using it.

That May 15th date is approaching and I’ve spent some time researching the alternatives. Since the main selling point of the OBiTalk devices was Google Voice being free, Obihai has recommended the following 2 VoIP services:

They range from $35-40/year with unlimited incoming calls and ~300 minutes of outgoing calls. Not too expensive, but definitely far from free. If outgoing calls are a must, they’re worth considering.

However, unlike (or maybe like) many others, I don’t care much about outgoing calls. Plus, you can still use Google Voice’s web UI to make long distance calls. If you’re in this scenario, you’re in luck!

The easiest option is to forward Google Voice to your cell phone. You can either do this by installing the Google Voice app or configuring Google Voice to ring your cell phone. That way, people who currently have your Google Voice number will still be able to reach you.

However, I preferred to have Google Voice ring my home phone, so I began looking into options. Before OBiTalk, I had forwarded my GV# to IPKall. IPKall isn’t exactly a SIP provider, but they provide you with a real phone number which forwards to any SIP provider of your choice. Back then IPKall recommended FWD as the SIP provider, but it looks like they’re recommending Callcentric and CallWithUs now. After looking into them, I discovered that Callcentric will provide you with a phone # for free, so you don’t even have to use IPKall.

Signing up with Callcentric was straightforward. After creating your account, they’ll send you an email to confirm your email address. What you want to do next is order a Free Phone Number on their products page. You’ll end up getting a NY phone number which you won’t be using beyond telling Google Voice this is the number to forward your calls to. If you specify that you won’t be using this service inside the USA, you can avoid the E911 fee.

Next, it’s time to set up the Obihai device. Log into obitalk.com and select the device you want to configure. Delete your Google Voice service provider and set up a new service provider using Callcentric. You’ll have to select OBiTALK Compatible Service Providers halfway down the page.

The only 3 fields you’ll want to provide here are the area code, Callcentric Number, and Callcentric Password. The Callcentric # is different from your username. You can find your Callcentric # in the left column after logging into your account. Save and let your OBiTalk device reboot.

Next, log into Google Voice and go to Settings > Phones. Add another phone and provide it with the free phone # you just got from Callcentric. Google Voice will now ring your phone and ask you to verify the phone number. After verification, everything is set. You can now receive incoming calls for free on your Google Voice #!

DynDNS ends free service after 15 years

I’ve been using DynDNS for over a decade, ever since we first got broadband at my house. Back then I’m not even sure we broke 1Mbps down. But with an always-on connection, being able to get to your IP address without remembering a long string of #s was crucial.

About half a year ago, DynDNS sent out an email notifying us that in order to keep the free service, we would have to log into our account every 30 days or else it would expire. It was nice of them to send reminders several days before the expiration, or else I would never have remembered.

Anyway, they sent us an email this morning and posted on their blog: Why We Decided To Stop Offering Free Accounts

It’s hard to blame them when I’ve been leeching off this service for the past 15 years, but I don’t exactly need their pro features either.

So I’ve been looking for alternatives and found out that my ASUS router doesn’t support afraid.org. Here is the list of DDNS services that my ASUS router supports:

  • asus.com
  • dyndns.org
  • tzo.com
  • zoneedit.com
  • dnsomatic.com
  • tunnelbroker.net
  • no-ip.com

While going through that list, I found out that DNS-O-Matic is run by the same folks who own OpenDNS, and I was even more excited to discover that DNS-O-Matic can relay my IP address to far more DDNS services than my ASUS router supports. In fact, with DNS-O-Matic, I can now use afraid.org.

Once you’ve created your DNS-O-Matic account and verified your email address, adding services is pretty straightforward. If you’re using afraid.org, it’s going to prompt you to enter a key. The key is the string that follows update.php? in your update URL, which you can find here.

If you’re using DNS-O-Matic with your ASUS router, here are the settings you should use:

  • Host Name: all.dnsomatic.com
  • User Name or E-mail Address: (use your username; email won’t work)
  • Password or DDNS Key: (just your DNS-O-Matic password)
  • Enable wildcard: No

Blank hostnames aren’t allowed and if you enter any other hostname, you get back the following error: Request error! Please try again.

According to DNS-O-Matic’s FAQ:

How do I update all my DNS-O-Matic services at once?

Leave the hostname parameter blank. Or, if your software client requires a hostname, send all.dnsomatic.com as the hostname.
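For anyone scripting updates directly instead of relying on the router, DNS-O-Matic accepts the common dyndns2-style update API (the endpoint below is my assumption based on their developer docs; credentials go in HTTP Basic auth). A minimal sketch of building the update request:

```python
from urllib.parse import urlencode

# Assumed dyndns2-style endpoint for DNS-O-Matic; username/password are
# supplied via HTTP Basic auth when you actually fetch the URL.
API_BASE = "https://updates.dnsomatic.com/nic/update"

def build_update_url(hostname="all.dnsomatic.com", myip=""):
    """Build the update URL; 'all.dnsomatic.com' updates every service
    on the account, and omitting myip lets the server detect your IP."""
    params = {"hostname": hostname}
    if myip:
        params["myip"] = myip
    return API_BASE + "?" + urlencode(params)
```

You would then GET that URL with urllib.request (or curl) plus a Basic auth header built from your DNS-O-Matic username and password.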

Enabling 2-Step Verification on Google

Wow… it’s been a long time since I posted anything on my blog. My WordPress dashboard looks completely different.

What is 2-Step Verification?

For the longest time, I’ve been wanting to enable 2-factor authentication on my Google accounts. Google calls it 2-step verification, but it means the same thing. For those who don’t know what 2-factor auth is, it basically means that besides knowing your password, you’ll also need a secondary credential. Banks like to ask you security questions when you log on from a new computer. Google’s 2-step verification calls you or sends you a text message with a 6-digit code that you’ll also need to enter before logging in from a new computer.

Google 2-Step Verification

You can also install Google Authenticator onto a supported device, which will generate a similar 6-digit code without Google having to call or text you.
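Under the hood, authenticator apps like this implement TOTP (RFC 6238): an HMAC-SHA1 over the number of 30-second intervals since the epoch, truncated to 6 digits. A minimal sketch in Python:

```python
import hmac
import hashlib
import struct
import time

def totp(secret, for_time=None, period=30, digits=6):
    """TOTP (RFC 6238): HMAC-SHA1 of the current 30-second time step."""
    if for_time is None:
        for_time = int(time.time())
    counter = for_time // period
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# With the RFC test secret b"12345678901234567890" at time 59,
# this yields "287082" (the RFC 4226/6238 test vector).
```

The shared secret is what you set up when you scan the QR code, which is why the app keeps working with no cell reception.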

Why the switch?

I’m already using super-long, randomly generated passwords for every online account I have, but I’ve been meaning to enable 2-factor auth for some time. When you think about all your accounts on the internet, there’s usually some way to reset your password via your email. When someone compromises your email, they’ve basically compromised everything you have online: your online identity.

Recently, someone was blackmailed into giving up their @N Twitter account. 2-factor auth wouldn’t have helped in that case, since it was done via social engineering, but it reminded me of all the other account-compromise news I’ve read over the years.

Things to keep in mind

Enabling 2-step verification will inconvenience you, probably a lot if you have many devices and services that you use to log in with your Google account.

  • Every time you log into Google from a machine you haven’t used before, you’ll be prompted with a 2nd screen to enter a 6-digit security code generated at that time.
  • Every machine/service you’re currently logged into will require you to log back in. That includes mail on your phone, mail on your desktop, messaging services, etc.
  • Certain apps won’t support 2-step verification. In those cases, you’ll have to log into Google from a browser and generate an app-specific password for each of them.

If you’re willing to put yourself through all this, then continue reading.

Here’s how you do it

Log into accounts.google.com and click on the Security tab. On the left hand side, you should see 2-step verification which should show Disabled. Go ahead and click to enable it.

2-step verification

Go through the setup process, where they’ll basically ask you to confirm your phone number. 2-step verification is now enabled, but you’re far from done.

I don’t see 2-step verification

If your Google account is from a Google Apps for Business/Education/Hosted Domain, you’ll need to speak with your admin about enabling 2-step verification. I had trouble following their instructions initially, but it turns out the Security icon is hidden inside More Controls, which sits at the bottom of your browser window if you’re not paying attention.

Once you’ve located the Security option, enabling 2-step verification is pretty straightforward.

Now as the user, you can follow the steps above to enable 2-step verification for your account.

Now what?

I highly recommend downloading Google Authenticator and using that instead of having Google call/text you the 6-digit code every time, especially if you’re stuck in a place with no cell reception (the horror!). Once you have the app installed, go ahead and set it up by following the instructions on the Verification Codes tab of your 2-Step Verification dashboard.

I would also recommend downloading/printing your backup codes in case you lose your phone or no longer have access to it. I keep mine encrypted on my machine so I have easy access to them.

Now that you’ve enabled 2-step verification, your mail app has probably complained that your login credentials were rejected. If it hasn’t yet, it will soon. You’ll now need to generate app-specific passwords by clicking on that tab.

When generating an app-specific password, Google recommends providing a specific name (e.g. Gmail on iPhone) so you can easily remember which one to revoke in case your device gets stolen. Once generated, go to the app and update the password and things should start working again.

RSS Feed for Hacker News’ Top Links

Hacker News is one of the few news sources I follow daily. It’s a bit like Digg/Reddit, where posts are user-submitted, but it focuses more on tech and related news.

However, as it’s grown more and more popular, so has the number of submissions. I generally follow this type of news via RSS feeds, and their RSS feed just has too much noise. They do have a Top Links list (highest-voted recent links), but unfortunately no associated RSS feed.

I’ve been using Daily Hacker News, but I dislike the format where it only compiles a list of the top 10 once a day. I wanted something that would insert a new RSS entry when a new item hits the best of list.

That’s why I’m introducing Top Links | Hacker News RSS Feed: http://feedpress.me/hacker-news-best

Following the same style as Hacker News’ RSS feed, each item links directly to the news source and there’s a comments link in the body. With individual RSS entries, it’s a lot easier to scan for items just by looking at the title, and you can now save/tag entries for later.

Enjoy!

Direct Links to Apple’s iTunes App Store Categories

If you’ve ever wanted to post a direct link that opens an App Store category in iTunes, you’ve probably run into a bit of trouble. If you right-click and copy the link from an app’s category, you’d probably end up with something like: https://itunes.apple.com/us/genre/ios-games/id6014?mt=8

No problem when you open it from an iOS device, but if you open that link in a computer browser, you’ll most definitely be disappointed.

You can change https:// to itmss:// and, if the protocol is registered, it’ll open in iTunes. But Facebook and Twitter don’t treat itmss links correctly: Facebook redirects to http:// and Twitter doesn’t even linkify it.

I went on a quest to find regular http links that’ll open App Store categories, and I found the following. Basically, what you’ll need is the MZStore.woa API’s viewGrouping call, and to figure out what the group id is.
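Based on that pattern, building one of these links is just a matter of plugging a group id into the viewGrouping endpoint. A sketch (the id used in the comment is a placeholder for illustration, not a verified category id):

```python
# Desktop-friendly App Store category links go through the MZStore.woa
# viewGrouping endpoint described above; group ids must be discovered
# per category.
MZSTORE_BASE = "https://itunes.apple.com/WebObjects/MZStore.woa/wa/viewGrouping"

def category_link(group_id):
    """Return a plain https link that opens the given category group id."""
    return f"{MZSTORE_BASE}?id={group_id}"

# e.g. category_link(123) -- substitute a real group id once you find one
```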

Unfortunately I didn’t look far enough to figure out some of the newer categories: Books, Catalogs, Food & Drink, Medical, Newsstands

So if you know the direct links to the newer categories, feel free to post them in the comments below and I’ll update my post.

Ruby-1.9.3-p429 hangs when calling OpenSSL

So I’ve been debugging a problem for the better part of today. We first noticed an issue when our test suite was taking forever to finish, and it turns out that a certain server we integrate with was timing out on every single test. We initially chalked it up to the server being slow, but when we have 10 tests each taking 60 seconds to time out, it adds a big chunk of time to our test suite runs.

To provide some more background, our Ruby on Rails app uses ActiveMerchant to connect to NMI to process transactions. We kept getting the following error: “The connection to the remote server timed out”

The weird thing was that it was only happening on our Macs running OS X 10.8.3 (Mountain Lion), but not on our production server, which runs Ubuntu.

So I decided to spend some time debugging the issue. I found out if I switched back to ruby-1.9.3-p392, everything worked fine. I thought maybe my ruby was compiled incorrectly, so I recompiled ruby-1.9.3-p429, but that didn’t seem to fix the problem.

Tracing the code:

  • ssl_post
  • ssl_request
  • raw_ssl_request

which eventually creates a Net::HTTP connection and makes the SSL request.

So I wrote a little test to see what happens:
h = Net::HTTP.new('secure.networkmerchants.com', 443).tap do |http|
  http.use_ssl = true
end

h.post('/api/transact.php', '')

In p392, I would get:
#<Net::HTTPOK 200 OK readbody=true>

But in p429, I would get:
Errno::ECONNRESET: Connection reset by peer - SSL_connect

Searching for that error string, I eventually came upon openssl. I found out that p429 had switched to using homebrew’s version of openssl (version 1.0.1e) instead of the system’s version (version 0.9.8r).

Using openssl 0.9.8r, everything worked fine, but with openssl 1.0.1e, the connection timed out with the following error:
$ openssl s_client -connect secure.networkmerchants.com:443

CONNECTED(00000003)
write:errno=54
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 322 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
---

I contacted the openssl users mailing list and got back the following response:

This is most likely another case of the frequently reported (and discussed) issue that 1.0.1 implements TLS1.2, which has more ciphersuites enabled by default and additional extensions, which together make the ClientHello bigger, and some server implementations apparently can’t cope. It appears in at least many cases the cutoff is 256 bytes, suggesting these servers don’t handle 2-byte length right.

It’s unlikely that this would be explicitly configured on a server, rather it would be an implementation flaw that previously did not cause a problem. It might occur in an older version of server software fixed in a newer version.

For many details see
http://rt.openssl.org/Ticket/Display.html?id=2771&user=guest&pass=guest

Short answer is that restricting to TLS1(.0), and/or a smaller list of ciphersuites (but still enough to intersect with the server), likely works. Both do for me using 1.0.1e to your example host. You can use -msg in s_client to see exactly how much (and what) is sent for different options.

So I tried setting the ssl version to :TLSv1, but that didn’t seem to work. Setting it to :SSLv3 did, though:
http.ssl_version = :SSLv3

Following the example from Forcing SSL TLSv1, I was able to override the ssl_version of the http connection that ssl_post creates:

class SSLv3Connection < ActiveMerchant::Connection
  def configure_ssl(http)
    super(http)
    http.ssl_version = :SSLv3
  end
end

def new_connection(endpoint)
  SSLv3Connection.new(endpoint)
end

Reminder Issues in iCal with Google Calendar

I’ve been using Fantastical on my Mac and iPhone as my default calendar application these days and it’s been wonderful, or should I say fantastical. Anyway, I recently hit a weird issue that I initially thought was a bug in Fantastical, but upon further investigation it appears to be a bigger issue between Google Calendar and Mac’s Calendar app (previously known as, and referred to for the rest of this article as, iCal).

I noticed that whenever I tried changing reminders in Fantastical, despite the change appearing to succeed, if I reopened the event the reminders were reverted. Not only that, when I create a new event in Fantastical, it always uses the default reminders, despite the default being no reminders. Even when I add a reminder before creating the event, my reminder gets removed and the defaults are used.

I tried creating a simple event in iCal and was able to set the reminder I wanted. So I contacted @flexibits (maker of Fantastical) on Twitter, and they basically told me the reminder settings get overridden by Google.

So I did some more testing and here’s what I actually found:

  1. In iCal, you can set the reminder you want when creating the event, but once the event is created, you can’t change the reminders anymore. It’ll appear that any reminder you modify, add, or remove has succeeded, but if you open up the event details again, you’ll find the reminder settings have been reverted to what they were previously.
  2. Despite the reminder settings being reverted in iCal, Google Calendar actually shows the reminder changes you’ve made. So there appears to be a sync issue between iCal and Google Calendar, where iCal can push reminder changes up but can’t pull them back down. Not sure if the bug lies in Google’s implementation or iCal’s.

Since Fantastical always uses the default reminders, it leads me to think that creating an event is a multi-step process in Fantastical, where adding the reminders is a later step that therefore gets ignored.

I don’t have a solution here; I just wanted to share my findings. It seems like if you want to use Google Calendar without the default reminders, your best bet is to create events in iCal and set the reminders you want there.

get_template_part_content for Single Posts

I’ve been looking for a non-intrusive way to insert text before any of my WordPress Single Posts and Pages. In the past, I modified the themes directly, but since I didn’t own the themes I was using, anytime I updated the theme, I would lose all my changes.

I found that I could insert text at the top and bottom of my pages using add_action. For example:

add_action( 'get_footer', 'echo_hello_world' );

function echo_hello_world( $post )
{
   echo "hello world";
}

would add the phrase “hello world” to the bottom of every single page on my blog. You can find a list of tags on WordPress’ Action Reference page.

One of the things I wanted to do was insert some text beneath my header, but before my post. I had found that get_template_part_content would do the trick on Pages, but not Single Posts. After failing to find a solution online, I looked into how the code differed between Pages and Single Posts.

Single Posts were calling:
get_template_part( 'content-single', get_post_format() );

while Pages were calling:
get_template_part( 'content', 'page' );

So I wondered if get_template_part_content_single would work, but nothing happened. I then tried get_template_part_content-single and, lo and behold, it WORKED! Given that Google has 0 search results for that exact phrase, I’m guessing no one’s discovered this action tag, or at least no one’s posted about it publicly.

So my code ended up looking like:

add_action( 'get_template_part_content', 'echo_hello_world' );
add_action( 'get_template_part_content-single', 'echo_hello_world' );

function echo_hello_world( $post )
{
   if( is_single() || is_page() )
   {
       echo "hello world";
   }
}

As a bonus, here’s a tag you can use to insert text between your post and the comments section: comments_template

Greyhole (Alternative for Windows Home Server Drive Extender)

Greyhole Storage Pool

Previous Storage Solutions

So I used Windows Home Server when it first came out and loved its Drive Extender feature. It allowed you to add as many drives as you wanted into a pool and then create shared folders. You could specify how many copies of a file you wanted for each shared folder, thereby providing redundancy. I was extremely sad when they announced that Drive Extender would be removed from Windows Home Server 2011, meaning I would have to look for a new solution.

In the meantime, I had switched to using Macs and began using SoftRAID as my storage solution. Ultimately, I was highly disappointed with the software given its price. I had set up 2 volumes, a 6TB RAID-0 array and a 3TB RAID-1 array. Both arrays consist of 2 external USB 3TB drives. Things worked great as long as you didn’t reboot. But when you did reboot, there was a 25% chance that a USB drive wouldn’t be mounted before SoftRAID timed out and marked the array as degraded. Things would be fine if you could just re-add the drive to the array, but unfortunately it didn’t work like that. It required you to reinitialize the drive and add it as if it were a brand new drive. The rebuild was the worst part, as it took 2 days for the drive to be fully rebuilt.

I’ve sent numerous emails to their support requesting the ability to set a drive to read-only until the mirrored drive is remounted or replaced, but they’ve fallen on deaf ears. And I’m not the only one who’s hit this problem. I’m happy to say I’ve officially uninstalled and completely removed SoftRAID from my machine.

A while back, I read about Greyhole being an alternative to Windows Home Server’s Drive Extender, and I finally decided to check it out.

Building the Rig

I decided that I needed to build a new rig for my storage server, mainly because I wanted to take advantage of the fact that my external USB drives were USB 3.0.

I picked up a barebone combo from Newegg a couple weeks ago for less than $200, which included a BIOSTAR Hi-Fi A85W motherboard, an AMD A10-5800K Trinity 3.8GHz CPU, and 8GB of G.SKILL Ripjaws X Series 240-Pin DDR3 SDRAM. The main reason this combo worked out was that it included everything (sans case and power supply). It also has 8 onboard SATA ports as well as onboard graphics. It’s a pity it only came with 1 stick of RAM, but do I really need more than 8GB of RAM for a file server? There were some free headphones too, but they’ll probably end up in my pile of useless junk in the garage.

I had also picked up a USB 3.0 motherboard adapter, giving me 2 extra USB 3.0 ports.

USB 3.0 Hubs

I don’t know why, but USB hubs tend to fail when dealing with large data transfers. I’ve tried at least a dozen different USB hubs, and the only one that has worked consistently is this Belkin 7-port USB hub, which unfortunately only supports up to USB 2.0.

Even with the USB 3.0 motherboard adapter, I had 4 USB 3.0 ports but 6 external USB drives. So I decided to try Monoprice’s 4-port USB 3.0 hub, and I’m sad to report that it also fails under high-bandwidth scenarios. By fail, I mean drives just disappear from the system, and that’s not good for a storage server.

So I ended up connecting 4 of the drives directly to the motherboard’s USB 3.0 ports and 2 of them to USB 2.0 ports. If someone has a good USB 3.0 hub recommendation, I’m all ears.

Installing Amahi + Greyhole

Why Amahi? Greyhole is the storage pool software, but Amahi provides a decent user interface on top of it. Since Amahi recommended Ubuntu 12.04 LTS, that’s what I installed. Ubuntu’s install was rather straightforward, though I had to burn a DVD for the first time in a long while; apparently getting Ubuntu to install from a USB thumbdrive isn’t very straightforward.

Following their instructions, I had a bit of a problem getting Amahi installed. It would fail randomly throughout the installation process, but the third time was the charm. I forget what I did to fix it, but I recall installing some dependencies it was expecting. That probably also explains why Greyhole wasn’t set up properly and I had to rerun the initialization scripts to set up the MySQL databases.

Adding Drives

Following their instructions, I got all my drives set up and auto-mounted. I recommend appending nobootwait to your mount options, so if Ubuntu fails to mount your hdd, it won’t just hang. Unfortunately, the hang happens before the ssh daemon is started, providing no way to fix it remotely, even if you just want the boot to continue without mounting the drive.
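For reference, an /etc/fstab entry with nobootwait might look like the following (the UUID and mount point here are placeholders; substitute your own drive’s values):

```
# hypothetical example; replace the UUID and mount point with your own
UUID=1234-ABCD  /var/hda/drives/drive1  ext4  defaults,nobootwait  0  2
```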

I would also recommend physically labeling your drives to match their mount locations (e.g. drive1, drive2, etc.), so when one dies you know which drive it is and which one to replace. Fortunately, with Greyhole, even if you accidentally remove the wrong drive, sticking it back in is not a big deal.

Once the drives are added, they should show up on your Dashboard > Storage:

Amahi Storage

Setting up Greyhole

By default, Greyhole is disabled, and you’ll have to enable it in the advanced settings. Once enabled, more options should show up under Shares. First, you’ll have to select which drives you want to be part of the storage pool. As you can see from the screenshot at the top, I’ve enabled everything besides my main OS drive. Adding the main OS drive to the pool is not recommended, as you don’t want to run out of space on that drive.

New options should also show up for the shared folders:

Greyhole Shared Folder Options

As you can see, I enabled this specific share to use the pool and to keep 2 extra copies (so a total of 3 copies of each file on 3 separate drives). The # of extra copies you can set ranges from 0 to max (the # of drives in your pool).

Connecting to Shared Folders

Greyhole exposes these shared folders as SMB/CIFS network drives, very much like Windows shared folders. By default it adds an hda machine name to your network, so to connect to it on a Mac, you’ll connect to smb://hda/name_of_share.

I’ve found that on the Mac, if you want it to automatically connect to the network share, the easiest way is to drag and drop the share into your list of login items in Settings > Users & Groups. Unfortunately, this has the side effect of opening all the share windows when you log in. In my case, about 10 different share folders get opened upon logging in.

How Greyhole Works

One of the things that confused me early on was that after making changes to a particular share, Greyhole didn’t seem to do anything. The logs showed it was just sleeping, even though I had just told it to add an extra copy of everything in a non-empty share.

It turns out Greyhole is a daemon service: it acts when new files show up, but any settings changes you make to an existing share don’t actually take effect until its nightly file system check. I’ve learnt you can manually trigger the file system check by running sudo greyhole -fsck.

When you copy files into a share, they all get dumped into a default location (configurable). The greyhole daemon service will kick in and depending on your settings, begin moving or duplicating files onto drives designated to be part of the pool.

I do have to warn you never to run sudo greyhole -D, as it starts a second instance of the Greyhole daemon, and the two instances confuse the shit out of each other. While one service was copying a file, the other was deleting it. When the 1st service saw the file was gone, it assumed the user had deleted it and deleted all its copies. Good thing I always double-check with rsync to make sure all copied files are good.

What you should do instead is: sudo service greyhole start | restart | stop

Greyhole also has an option to balance available space among the drives, which I assume means that if you add a new drive to the pool, it will shift files around so you won’t have one drive that is fully packed while another is completely empty. The command is sudo greyhole -l, but I haven’t really seen it do much.

Removing / Replacing Drives

One of the best benefits of Greyhole / Drive Extender is the ease of adding/removing drives. I’ve already discussed adding drives, and removing drives isn’t much more work. If a drive still works but will be removed, you mark it as “going”, and Greyhole will begin moving its files onto other drives in the pool. If a drive is dead, you mark it as “gone”. Files without extra copies are unfortunately not recoverable; for files that do have duplicates, Greyhole will begin re-duplicating them from the other drives to ensure the # of copies matches your settings.

Once the process completes, it’s safe to remove the drive.

Trash

Ever get that dreaded warning when deleting files from a network share, telling you the deletion will be permanent? Yeah, I always double-check to make sure I didn’t accidentally select the wrong file (god forbid, the wrong folder). With Greyhole, when you delete a file, it gets moved to a Trash shared folder, which you can mount and access like your recycle bin. To empty the trash, just run sudo greyhole -a

Monitoring Greyhole

There are a couple of ways to monitor what Greyhole is doing.

  • watch greyhole -q displays your list of shares and # of operations queued.
  • greyhole -s shows you the stats of your drives (e.g. total/free space)
  • greyhole -S shows you a summary of what Greyhole is doing.
  • greyhole -L tails the log file telling you exactly what it’s doing.

Remote Access

A couple things you want to install right off the bat are openssh server and vncserver.

I found it rather weird that a Linux distro didn’t have an ssh daemon installed by default. It does have a VNC server, but it requires you to log in at the console before you can “share desktop”. The reason you need VNC is that the Amahi dashboard is only accessible on that machine. Most Greyhole functions you can control via SSH.

I had a lot of trouble getting the VNC server working (without having a user logged in). Unfortunately, detailing how I got it to work would be another giant post in itself, but if you have questions, leave a comment below and I’ll try my best to answer.