Apple’s New Swift Language and NetCDF

With the release of Xcode 6 and its new OSs, Apple is pushing the use of a new language: Swift. Swift, unlike the venerable Objective-C, is meant to be a “modern” language, which means moving away from much of the computer-language world that’s based on C. You can argue that classes, structures, and typedefs are all more powerful in Swift than in the C/C++/Objective-C world. Furthermore, Swift mostly hides one of the biggest banes of C programming: the pointer.

However, the pluses of Swift also present a bit of a problem: how to interact with C-based languages. Apple did spend some time making Swift compatible with Objective-C and C (no joy for C++, at least not yet). Objective-C is quite easy to incorporate into Swift; after all, Objective-C has been Apple’s primary language since the transition to Mac OS X. C is also compatible with Swift, but it appears quite tricky to a Swift newbie like me.

I thought I might do a bit of work to understand how Swift might work with C. So, my question is “can I access NetCDF from Swift?” NetCDF is a file format for storing large datasets. It’s powerful and quite useful in the work I do. There are several forms of the library: C, C++, and Java. The only one of these compatible with Swift is the C library, and the C library is an API that is loaded with pointers: exactly what Swift is designed to hide.

It is possible to make this easier: I could just use my own Objective-C interface to the NetCDF library. The work is already done and compatible with Swift. It doesn’t feel clean to me, though. If Apple really wants us to eventually migrate to Swift, then why not access NetCDF from Swift directly? Indeed, I work with many libraries that are C-based APIs, so avoiding this type of library doesn’t solve anything. Instead, it’s time to dive in and get my hands dirty with Swift.

Installing NetCDF

Installing NetCDF is fairly easy. Download the source from Unidata (http://www.unidata.ucar.edu/downloads/netcdf/index.jsp), configure (I configured with NetCDF-4 and OPeNDAP turned off, but I’ll use the NetCDF 3 format), and compile. Since I’m not a fan of having to install libraries all over the place, I used the static C library in my app. You must also configure Xcode to use the library and to be able to locate the header.
The final requirement for using NetCDF in Swift is making the library accessible to Swift. You must include the following in your app’s Bridging Header:

#import <netcdf.h>

Now, Swift can see the NetCDF API. If this #import statement is not in the bridging header (which Xcode can create for you), then you will not be able to access any of the APIs. A bonus is that autocompletion of NetCDF function calls will appear as Swift calls, which greatly helps the process.

Opening a File

The first step in opening a NetCDF file is, well, opening it. The C command for opening a NetCDF file is:

int nc_open (const char *path, int omode, int *ncidp);

Here, path is a string representing the path to the file, omode is an integer value telling the library how to open the file, and ncidp is a pointer through which the library returns an ID value used to identify the open file. The function returns an error code. Fairly straightforward C.

In Swift, the command looks a little different.

nc_open(<#path: UnsafePointer#>, <#mode: Int32#>, <#ncidp: UnsafeMutablePointer#>) -> Int32

The Swift version is essentially the same, but with different details. The path string is now an “UnsafePointer”, which means the data can’t be changed and has an unknown size. The pointer for ncidp is now labeled an “UnsafeMutablePointer”, which means it has an unknown size and the data can be changed. The unknown sizes are what make the pointers unsafe: in theory, overruns are possible, which can be dangerous. Swift is meant to avoid this problem, yet you still have to work with C.
First, how do we fill the ncidp pointer? This one is relatively easy. We create and initialize a variable to hold the ncid:

var ncHandle: Int32 = 0

We assign the type to make sure we’re compatible with the call; in the case of ncHandle, it’s an Int32. We have to be specific since the “Int” type can be a different size on different machines. On 64-bit machines, the Int type is 64-bit, but the NetCDF library is expecting a 32-bit integer. Since variables can’t be left uninitialized in Swift, we set it equal to 0 (perhaps it might be better to assign a negative number).

The tricky part here is the path. It must be a C string, not a Swift String or NSString. In my case, I’m starting with an NSURL (I get the path from a dialog box), so we must get the NSURL into a C string. The way I’ve done this is the following:

let thePath = URL.path?.cStringUsingEncoding(NSUTF8StringEncoding)

Here we ask the URL for its path and convert it into a C string using the UTF-8 character set. Note the “?” after path. That’s an example of optional chaining. We are calling a property of URL (path is a property), and it’s possible that path could be nil. We need to fail gracefully, so the optional chaining unwraps the path (in Swift, a variable can have a value or be nil, and unwrapping gets you to the value) or fails.
Now, we can put together our call:

var ncError = nc_open(thePath!, openMode, &ncHandle)

Notice in the above I still have to use an “!” with thePath. Since thePath is an optional (which means thePath actually represents a container that could have a value or be nil), I have to get at thePath’s value, which I can “unwrap” with the “!”. However, it’s better to check to make sure that thePath isn’t nil. So, we can change it up:

if let thePath = URL.path?.cStringUsingEncoding(NSUTF8StringEncoding)
{
    var ncError = nc_open(thePath, openMode, &ncHandle)

    if ncError == NC_NOERR
    {
        // we've successfully opened the file
    }
}

Here, we error check immediately when we create thePath. If thePath is nil, this code won’t run. Another advantage is that thePath is already unwrapped by this if statement, so we no longer need the “!” in the nc_open call.
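
As an aside, when nc_open (or any other NetCDF call) does return an error, the C library’s nc_strerror function will translate the error code into a human-readable C string, which we can bring back into Swift the same way. A minimal sketch (the println is just my placeholder for real error handling):

if ncError != NC_NOERR
{
    // nc_strerror returns a C string describing the error code
    if let message = String.fromCString(nc_strerror(ncError))
    {
        println("NetCDF error: \(message)")
    }
}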

Querying Dims

Because Swift is meant to be safe, we have to spend a bit of extra time building up information held within a NetCDF file. Let’s start with dimensions (dims). Dims are what a NetCDF file uses to define the size of its datasets. Each dim has an ID, a length, and a name. We must collect all of this for an unknown file, but we should collect the data for known files as well.
First, we need to know how many dims exist in the file. In the C library, we would call:

int nc_inq_ndims(int ncid, int *ndimsp);

The ncid is the file ID we created earlier; the ndimsp pointer is where the call will store the number of dims in the file.

In Swift, the call looks like:

nc_inq_ndims(<#ncid: Int32#>, <#ndimsp: UnsafeMutablePointer#>)

So, in this case, we have to come up with variables for the number of dims and their IDs (an array). The number of dims is easy: you simply need an Int32 variable set to 0:

var ndims: Int32 = 0

However, the array takes a bit more work:

var dimids = [Int32](count: Int(NC_MAX_DIMS), repeatedValue: 0)

Here, we have to have a buffer big enough to hold the maximum number of dims. Notice that we’re using the macro NC_MAX_DIMS and making sure it’s typed correctly. We’re also seeding the array with 0 at each position via repeatedValue.
Now we can make the call:

var ncError = nc_inq_dimids(ncHandle, &ndims, &dimids, 0)

Note that we’ve moved from nc_inq_ndims to nc_inq_dimids, which fills in both the number of dims and their IDs in one call. As in normal C, we’re passing pointers for ndims and dimids.

However, we’re not yet done with dims. What we still need are their names and lengths. We can get that information using the dimids and the following C call:

int nc_inq_dim (int ncid, int dimid, char* name, size_t* lengthp);

And in Swift:


nc_inq_dim(<#ncid: Int32#>, <#dimid: Int32#>, <#name: UnsafeMutablePointer#>, <#lenp: UnsafeMutablePointer#>) -> Int32

Assuming we didn’t get an error getting our dimids, we can now step through all of the dimids and build up our list of dim names and lengths. So, we create our string variable:

var tempString = [CChar](count: Int(NC_MAX_NAME+1), repeatedValue: 0)

And size variable:

var length: size_t = 0

But we also need a place to store all of our values, so let’s create some empty arrays and loop over the dimids, as in the sketch below.
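
Here’s a minimal sketch of how that loop might look, assuming the file was opened into ncHandle and the dimids were gathered as above (the array names dimNames and dimLengths are my own):

var dimNames = [String]()
var dimLengths = [size_t]()

for i in 0..<Int(ndims)
{
    // fresh buffers for each nc_inq_dim call
    var tempString = [CChar](count: Int(NC_MAX_NAME + 1), repeatedValue: 0)
    var length: size_t = 0

    if nc_inq_dim(ncHandle, dimids[i], &tempString, &length) == NC_NOERR
    {
        // convert the C string buffer back into a Swift String
        if let name = String.fromCString(tempString)
        {
            dimNames.append(name)
            dimLengths.append(length)
        }
    }
}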

Reflections

At this point, things started changing for me. I was getting the hang of Swift, at least the parts of Swift I needed. It was time to start over. Now, instead of developing a class for specific files, I decided to at least start a Swift version of my Objective-C interface for NetCDF. I’m not going to do the whole thing immediately, as that’s too much work for the moment. The first goal is instead to just build a skeleton framework for reading NetCDF 3. I’m not going to go over the details of this here, but I do want to reflect on my first reactions working with Swift.

Swift’s limitations were becoming clear at this point. I really have to ask myself whether the Swift-native version of my NetCDF framework is better to use than my Objective-C version. Right now, I’d say stick with Objective-C. There are several reasons for this:

1. Swift is almost entirely dependent on the Objective-C class structure and APIs. While you can do a little without any Objective-C, you’re often still stuck making Swift classes that are subclasses of Objective-C classes. As such, Objective-C works just fine with Swift. Why make extra work? Just use existing Objective-C APIs in Swift.

2. At present, the best place for Swift, based on my efforts, is in the realm of view controllers and views. Swift can really speed up the process of developing user interfaces. Deeper logic where you have to push data around can be a bit more difficult in Swift (and you drop down to Objective-C classes anyway).

3. Fear, uncertainty, doubt… FUD. I’m going to spread a bit of FUD here based entirely on my own fears, rather than anything I’ve directly heard. I started developing on Mac OS X fairly early, and I remember there were two languages you could use: Objective-C and Java. Indeed, Steve Jobs once claimed they would make the Mac a top-notch Java development platform. The Java bridge support was discontinued, however, in 2005. It was no skin off my nose; I never used the Java bridge. I have no idea what ultimately led to this decision (maybe lack of use?), nor am I sure how many apps used the Java bridge. Will Swift have the same fate? Given that you can do very little without bridging back to Objective-C, it does seem tenuous at best. If I’m right, however, that interface work is better in Swift, then Swift might survive long enough to become a powerful language on its own. Time will tell, but it’s a risk to write production code in Swift and it’s a risk not to write production code in Swift (Apple might fully transition to Swift… maybe).

Drobo update – 5 years in!

Wow,

I’ve just checked my records.  I bought my Drobo 5 years ago, already.  Since then, I’ve probably had, on average, 1 drive failure per year.  That’s pretty scary to think about.  If I hadn’t been using the Drobo (and some of my backup routines), I would have lost a huge amount of data.

My Drobo is the original model: a USB 2.0, 4-bay machine.  Averaged over the time I’ve used it, the initial purchase works out to about $70 a year; I’m not going to try to figure out the additional costs, including power, though.  Not bad.  It would cost about the same, however, to use a more modern online backup system.

Surfing around the net, I found that some users haven’t had a good experience with their Drobos.  Indeed, my experience has not been perfect.  First, there seemed to be a promise that I could use much larger drives in my Drobo than was true in reality.  I’m not sure if that promise came from Data Robotics (makers of Drobo) or other reviewers.  But, in reality, it maxed out at 5.42 TB.  That’s a bit small for my needs these days.  My system at the moment is 88% full and I hesitate to add much more.  So, I’ve been going back to individual drives for my main storage (I forgot how terrible that can be).  Indeed, I had a hard drive failure on a drive that wasn’t in my Drobo (fortunately, it was backed up).  Still, it feels incredibly risky not to have my primary storage on the Drobo.

The second problem with my Drobo (remember, it’s the original) is that it’s very slow.  Much of the data I’ve placed on the machine has become unwieldy to use.  iPhoto, for instance, is now painful to use when the library is on the Drobo (well, it’s also on a remote computer, so that’s not entirely fair).  Rebuilding after a drive replacement or failure can easily take a week or two (and during all that time the system is not protected – again, scary).

There have been a mess of other small problems as well.  For example, sometimes when my computer reboots, the Drobo is not recognized and the only thing I can do is pull the plug (no off switch!).  The worst situation like that was when the computer rebooted while the Drobo was rebuilding – I didn’t have any access to my files for a week or two.  Scary!

So, my experience was not perfect, but the Drobo saved me enough that I decided to buy a new one.  In fact, it’s a Drobo 5N.  Unlike the previous model, this one is Ethernet rather than USB, has 5 bays rather than 4, and can use much larger hard drives (at least 4 TB).  Importantly, I can use the extra drive bay in two ways – additional storage OR a second drive for protection (should a drive fail, the system will still be protected).  Right now, I’m not using that feature (I need the drives currently tied up in the old Drobo), but I plan to do so ASAP.  The only problem with that solution is that I’ll be back down to the 5.42 TB I already have in the old Drobo.

Given the nature of the two Drobo systems (one being USB and the other Ethernet), I’ll have to rethink some of my workflows.  Right now, I need to test whether the Ethernet model preserves file metadata, such as creation dates and tags, before some of the workflows are moved.

I haven’t yet tested the speed of the 5N.  There are a couple of reasons it doesn’t make sense at the moment.  First, my mSATA drive has not yet arrived; this flash drive will speed up the performance of the 5N.  Furthermore, I’m currently working to transfer data from the old Drobo to the new, which will be slow no matter what.  So, no point.

What will happen to the original Drobo?  I’m planning to repurpose it.  It will become a local storage device for one system and will likely contain much smaller drives, at least for a while.

Sadly, I’m expecting my transition from the original Drobo to the 5N to be quite slow.  I suspect it will take at least a couple of weeks, although I’m expecting it to be fully usable within a few days.

I’m hoping for another good 5 years with the 5N.

Astronomy Videos

Cosmos – Eratosthenes

https://www.youtube.com/embed/G8cbIWMv0rI

Cosmos – Kepler and Brahe

Cosmos – Kepler’s Laws

https://www.youtube.com/embed/XFqM0lreJYw

Cosmos – Kepler’s Persecution

https://www.youtube.com/embed/-CE4owAfDow

Apple updates lots of stuff…

Updating software, particularly operating systems, can be scary.  Problems I’ve encountered in the past include failed updates, data loss, bad changes in functionality, and buggy software.  When you get a lot of software updates at the same time… well, that’s very scary.  So, along with releasing Mac OS X 10.9 Mavericks, Apple also released iPhoto, iMovie, Pages, Numbers, and Keynote for both iOS and MacOS.  That’s a lot of FUD!

I’ll try to post my thoughts on each as I experience them, but for now, I’ll concentrate on a couple things.

First, installing Mavericks…  Take the advice posted elsewhere: back up and be ready for the worst, but hope for the best.  I’ve installed Mavericks on 5 machines since the release.  It took almost a week to get them all installed since I took some baby steps to ensure mission-critical software still worked.  Of the 5 machines, 4 installed without any hitches (hint: if you have multiple machines, back up the installer and copy it to any machine you want to update so you don’t download too much data).  The one installation hitch was on a Mac mini.  In this case, the install hung fairly early in the process.  After a bit of searching on the web, I found that if you zap the PRAM, the install might work.  It did for me.

The biggest new feature in Mavericks for me is the improved multiple-display support.  Now, the displays are mostly independent of each other.  The benefit this brings, at least for me, is the return of full-screen apps.  Running a full-screen app in Mountain Lion would take over both displays: the window would be on one display and you’d get a linen-covered second display.  Now, you can have a full-screen app on one display and still work on the second.  After just a few hours, I started putting all full-screen apps on my big display, and apps where I don’t want full screen, including the Finder, on the other.  It’s quite a powerful combination.  Mail and Safari (and Chrome when I use it) are almost always full screen.

There are serious issues, though.  Sometimes I can’t get the menus or Dock to show on the full-screen display.  Sometimes the only way to get them back is to quit the app (I haven’t found any consistent rules, though).  Some apps don’t play well, either, like iBooks.  Under the right conditions, iBooks won’t display movies correctly in full-screen mode.  Still, it’s better than what we had.

The new Numbers looks nice, but in some ways, it’s difficult to use.  For example, I’ve been trying to make a simple scatter plot with more than one data set.  Try it…  Did you notice you had to turn off “Share X Values”?  Do you know where that is?  Did you realize your data has to be in multiple tables (as far as I can tell, at least)?  I almost totally abandoned Numbers over this issue alone.  Even now, knowing how to create a scatter plot, I still might drop Numbers from my lineup just for this.

Going paperless: an update

It’s been a long road, and a long road remains ahead. Given the power of computers, we were promised the “paperless” office. It never happened. I suspect like most people I have more paper to deal with than ever.

A long while ago, after a major move, I realized I had a lot of paper. After my wife then moved more boxes into our home, I realized we had A LOT of paper. We had to do something. I bought a discounted all-in-one printer/scanner and started on my documents in the basement. Although I’ve never quite got everything done, our basement has a lot more room… to fill up with more paper.

Last month, I looked into the crystal ball and saw what was coming… tax time! We didn’t have any good procedures in place to keep up with our finances this year (as usual). We start well, but when we hit some sort of bottleneck or roadblock, we tend to not do a thing until we have to. When tax time rolls around, it’s a month or two of pure panic trying to get things in order. Our routine has been: sort papers, do a bit of entering, put papers away, re-sort papers, enter, put papers sort of away, and around it goes until it becomes a big mess that we never really clean up.

This year, I decided the paperless route is the way to go. Sort papers only enough to scan them, then put them away and don’t bother with them until you archive or destroy them. But how to do it? Luckily, a software developer provided a copy of the digital book “Paperless” by MacSparky (aka David Sparks). It’s a great book with lots of videos to explain how to do things. It was certainly a leg up, not only in software and equipment, but also in thinking about how to do things right. That said, his ideas didn’t entirely suit my issues, and I suspect the same will be true for most people, but it will definitely help you on your way. It’s available as an iBooks book or PDF.

Here are the lessons I’ve learned thus far:

  1. FUD (Fear – Uncertainty – Doubt). FUD is a term often used to describe someone’s negative opinion when it tends to be unwarranted. Here, I use it to describe what will kill your will to do anything like going paperless. If you fear your system, if you are uncertain about how to do things, or have doubt whether it’s working – you will fail if you don’t find a way to identify and fix the problems.
  2. Automate, automate, automate! The greatest source of FUD is your brain. Minimize the number of things you have to do and let the computer do the rest. If you have to do everything (scan, OCR, rename, move to a folder, deal with the paper), you’ll go mad and make lots of mistakes. Let the computer take some of that load off of you. You might need to learn computer automation for your system or buy software solutions to help, but it’s worth it.
  3. Capture everywhere! Once my initial scanning is done, the reality is that we won’t keep it up if we don’t make it easy for ourselves. In the “Getting Things Done” mindset, there is the concept of “ubiquitous capture”, or the ability to collect needed data almost anywhere. It’s a bit harder to scan something anywhere, but it’s easier than you might think. We have scanners in the office and kitchen, and for elsewhere we have cameras, such as a new iPod Touch. Furthermore, each of the solutions outside the office can automatically (or is it automagically?) get the scans onto the computer and do some initial processing (such as OCR) without me doing much of anything. If you can’t capture in as many places as possible, those holes will often create more FUD.
  4. Get a stamp! I recently bought a self-inking stamp that simply says “Scanned”. This, on the face of it, seems a little excessive. However, not instantly knowing whether or not a document has been scanned causes FUD. Believe it or not, I have had documents go through my system in the last month that are very, very old – possibly even already scanned. I simply did not know whether they were scanned, and it is sometimes quicker to scan a document again than to hunt it down. Today, if I don’t see a stamp, it’s not scanned.
  5. Trust, but verify. Like many people, I know that computers can seriously mess things up. Don’t ever assume that after you scanned a document, it successfully made it through your system. Make sure that it does. Don’t stamp it until you’re sure!
  6. File Creation Date. If you can set the creation date of a file, set it to the date the original document was created. In fact, duplicate that in the file name (see 7 below).
  7. Name (and tag) things consistently. Work on a good file naming convention and stick to it. If you use tags (I use the OpenMeta standard on a Mac), be as consistent there as possible. Keep in mind, however, that tags and file names are sometimes fragile and can be altered unintentionally. So, be aware of potential pitfalls. My convention looks like this:
    2013 – 01 – 01 – meijer, groceries, ccMYCARDID.pdf
    I start off with the date of the receipt or document (2013 – 01 – 01 –), then the store or source of the document (meijer), then categories (groceries), and finally, if it was a payment, some way to identify a credit card or cash (the “Paperless” book is the source of at least the date part of this system). The name can get excessively long, but on most modern OSes, it shouldn’t be a problem (and the files will sort themselves in order by date even if you don’t set the creation date of the file). Believe it or not, I don’t manually set the file name; I use an automated system. When my automation triggers to put a file in its place, it reads the creation date and the list of tags and renames the file accordingly. I don’t have to think about the file name at all. The “Paperless” book shows a few ways to do this automatically using OCR’d text, but his exact system doesn’t quite work for me. (There’s a sketch of this kind of renaming just after this list.)
  8. Identify bottlenecks. My main bottleneck is my email (yes, the already paper-free part of the system). I get hundreds of emails every day. In the middle of all that, there are some things I need to put into my system, like bills or receipts. If I don’t deal with them right away, they get lost in the weeds. Worse, once I do find them, I don’t have a way to mark them as “scanned” or “filed”. I need to fix this. I could switch to Google Mail and its web interface (which I don’t like that much), switch email apps, add plugins to my existing app, or set up some complex smart folders. I’m not really happy with any of those, but I’ll have to make a decision soon.
  9. Backup. Well, duh! Oh, wait, I haven’t done it yet! Once it is done, though, I’ll be making several copies to stash around to protect the data. Given the importance of the data, it would be bad to lose even a small portion of it.
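
To make the renaming in item 7 concrete, here’s a minimal sketch in Swift of building such a name. This is purely my own illustration – the function name is hypothetical, and my real system reads the date and tags from the file itself rather than taking them as arguments:

import Foundation

// Hypothetical helper: build a name like
// "2013 - 01 - 01 - meijer, groceries, ccMYCARDID.pdf"
// from a document date and an ordered list of tags (source first, then categories).
func paperlessFileName(date: NSDate, tags: [String]) -> String
{
    let formatter = NSDateFormatter()
    formatter.dateFormat = "yyyy' - 'MM' - 'dd"

    let datePart = formatter.stringFromDate(date)   // e.g. "2013 - 01 - 01"
    let tagPart = join(", ", tags)                  // e.g. "meijer, groceries, ccMYCARDID"

    return "\(datePart) - \(tagPart).pdf"
}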

Software and tools

I’m a mac guy, so much of the below is mac-related software and hardware. However, some of the below will also be useful to other platforms.

1) Data Storage

My document storage device is a Drobo. A Drobo is similar to a RAID, but it is a bit simpler to use. The Drobo is designed to limit the damage of drive failures. Believe me, I’ve had drive failures (at least 3 in the Drobo itself). While it protects against drive failure, you still need to back up.

2) Backup

I use spare hard drives and optical disks (although I’m moving away from optical). For hard drives, I use an external drive dock – I can just plug in a hard drive as if it were a floppy and back up the data. I am considering cloud-based backups, but I have enormous amounts of data and it doesn’t yet seem cost-effective in my case.

3) Backup storage

I use 2 fireboxes and each is stored in a different location away from my home. Documents and photos are extremely important. No point in tempting fate.

4) Scanners and Cameras

You can use both scanners and cameras to ‘scan’ documents. Scanners are a bit more precise, but a good camera does the job too. Here are the devices that I currently use:

  1. Epson Perfection 1660 Photo – an old, discontinued flatbed scanner that I keep in storage in the basement. I bring it out whenever I need to do a lot of scanning next to my computer.
  2. Epson Workforce 645 All-In-One – It’s a great device which has a document feeder and can do duplexing on certain document sizes. This is a great workhorse for capturing many normal documents.
  3. Doxie Go – This is a small, battery-operated scanner. It doesn’t do duplexing and it doesn’t have a lot of options. This is our “kitchen” scanner. The Doxie’s role is to allow for capturing important mail and receipts as they come in. Using a standard memory card, I’ve been able to scan between 100 and 170 receipts in one sitting. If I use a wifi-enabled card (which we do while it’s in the kitchen), I get only about 40 to 50 scans before it needs recharging. This isn’t a surprise, since the wifi takes a bit of power to operate. It’s certainly not a workhorse scanner, but not needing a computer at all to operate is handy.
  4. iPod Touch (5th Gen) – The new iPod has a decent-enough camera to “scan” receipts. It’s harder to use on full-size documents, but it can still do the job. The advantage of using the iPod as a scanner is that I can use it anywhere (in the house or out) and scan things the moment I get the receipt (in case I lose it!). I don’t recommend older iDevices since the camera isn’t particularly good for this role.
  5. High-megapixel cameras – While we haven’t tried it, we have a couple of cameras with 3 megapixels and up. In theory, these could do the same job as the iPod and “scan” documents. This approach may require mounting the camera to keep it stable. Plus, the same wifi card that is in the Doxie can also be used in the camera.

5) Cloud Services

Dropbox. Dropbox is essentially an internet sharing tool that allows you to share with all of your devices (and with others if you choose). The iPod in particular uses this service. When I “scan” a document with the camera, I send it to a specific folder in Dropbox. After a few minutes of internet and computer magic, a scanned and OCR’d version of the document is ready for tagging and filing. It would, of course, be better if I used an iPhone or an Android phone for this, but I don’t have one.

Eye-fi. I originally didn’t think of this as a cloud service, but it really is. An Eye-fi is an SD memory card with built-in wifi. Once it connects to your network, it will upload all the new images to the Eye-fi site. Your files will then make their way down to your computer. The nice thing about this is that we can use the Eye-fi for both photography and scanning. However, it mainly lives in our Doxie Go.

6) Software – iOS Based

There’s only one bit of software I’m using in the system on iOS – Jot Not. It’s an app designed for taking pictures of documents that then allows you to transfer them to other services like Dropbox. As a matter of fact, I was waiting for my wife in the car today and asked for any receipts she might have… boom! Done.

7) Software – Mac

OpenMeta – OM isn’t really software; it’s an open standard for tagging files on the Mac. It is not supported by Apple, and not all applications will respect the tags (i.e., if you do something to the file, you might lose your tags). There are a number of apps that support OpenMeta; some are free. Since this is my main method for tagging files, I try to use apps that will support, or at least respect, OM.

PDF Pen – PDF Pen is a step up from the standard PDF viewing app on a Mac. Most importantly for this system, it will OCR documents (and is AppleScriptable). PDF Pen will also scan files. However, it doesn’t always respect tagging. As a result, I do all the work needed using PDF Pen BEFORE I tag the file.

Yep – I’ve been using this app off and on for years. It is a PDF manager and viewer. It’s particularly useful for organizing documents. You can OpenMeta tag and set the creation date within Yep. Yep will also scan documents. Yep is my choice when I’m scanning non-standard document sizes or shapes.

Hazel – Hazel is one of those apps I’ve tried over the years and never found a use for. However, that’s changed. Hazel is, according to the website, “Automated Organization for your Mac”. Exactly what you need to file documents. It watches different locations on your system to see if anything matches a rule; when it finds a match, it executes the rule. I use this functionality in two ways. First, I use it as a funnel for new documents. Since I capture from a number of sources, documents tend to appear on my system in different places. Should a file appear in one of those places, Hazel converts it to PDF (if needed) and sends it to another directory. In that directory, the PDFs are presented to PDF Pen for OCR (using an AppleScript). Once done, Hazel moves the file again to a waiting area. There I manually sort the documents – some need to go into business, work, or personal. It’s at this point I tag and date each file, but I don’t rename it. Instead, I move it into another folder Hazel watches. Hazel determines whether it knows what to do with the file based on the tags. If it does, it automatically renames the file based on its creation date and tags and moves it into my file archive where it needs to go.

rsync – an open-source command-line tool to back stuff up! I may choose another app later, but it’s good enough for now.

Final Words

While my approach is great for me, it certainly isn’t what’s best for everyone. Take a look at the “Paperless” book or some similar book (“Paperless” is cheap and comes with video!) and develop your own system. It might be worth it in the end.  Just remember, once you start, you must keep it up if it’s going to be useful.  If the system is too hard, it will fail.

Seriously? I’m watching Anime?

Since giving up cable TV and going with other solutions, like Netflix, I’ve been enjoying watching TV more, while watching less. I spend little time just watching what’s on and more time watching what interests me.

Of late, I’ve been experimenting with content from foreign sources, like Europe and Asia. In the midst of this, I’ve ended up watching 4 short Anime series, each with a max of 26 episodes.

Now, there’s a lot I don’t like about Anime (at least the Anime I’ve seen previously). One, I’ve noticed there tends to be a lot of gore or horrible-looking creatures – they seem to go hand-in-hand. I’m not a big fan of that. Two, they repeat themselves a lot. I remember watching a movie called “The Guyver” or something like that. It seemed like every sentence contained the phrase “The Guyver” – sometimes more than once! Both of those are still true in the 4 series I’ve recently watched.

However, these shows (mostly) were entertaining anyway. For the most part, I liked the characters and the overall story lines.

I’ve most recently completed the series “Claymore”, about a group of women warriors who were created to battle evil monsters (remember my two peeves? gore and monsters). Despite its problems, I enjoyed the main character, Clare. I found that she had enough character growth to keep me interested in the series.

Best Student Council didn’t have blood and gore, but, like the title suggests, it was centered on a girls’ school and focused mainly on a new student… and her puppet. Probably one of the strangest shows I’ve ever seen. The first episode of this show was apparently a freebie on iTunes at some point. I downloaded it but never watched it. One day, trying to clear out my library of junk, I decided to watch the thing before dumping it. What a mistake THAT was! It hooked me with its odd humor, and then I was stuck watching the series…

Last Exile was one of those stories set in the far future with high tech, where much of the population lives poorly with low tech. It focuses on two characters, their vanship (a 2-person transport), and their roles as couriers who deliver messages. I found that after you get through the first few episodes, the show is quite entertaining and interesting.

Another show I watched was Tokko. A situation similar to Claymore: women with big swords killing a bunch of monsters. While it has its moments, I didn’t like it that much.

So my experience with these short shows is not unusual for me: I enjoyed the experience but was ultimately disappointed by the end. Endings for me are usually disappointing – it’s hard to write an ending that satisfies me, and if I enjoyed the show, I’m disappointed that there IS an end. So these short series are great – over relatively quickly, but I’m often sad that they’re over.

Papers2 v. Mendeley v. Zotero

If you’re like me, you have a lot invested in managing sources. For many years, I was a fan of and heavily used Endnote. Over the years, however, the high cost of maintaining an Endnote license (typically $99 per year) kept increasing relative to how much I actually used the software. Eventually, new players hit the market: Zotero (a Firefox plugin), Papers (Mac only), Mendeley, and many other smaller players began to appear. I started off working with Zotero. I really like Zotero. However, I liked the PDF viewing and management much better in Papers. That set up a problem for me… Papers to manage PDFs, but since Papers’ citation system left much to be desired, I kept using Zotero for the citations. Unfortunately, I’ve had to admit that this approach is untenable. Worse, I don’t yet see a good solution to resolve the problem.

Here are the things I want:

1) Good word processor integration. I think all three have varying levels of quality, so for now I’ll say this is even…
2) Manages PDF files well, with a clear directory structure and file naming convention – Papers wins this one. This is important if I have a copy of this stuff somewhere and want to navigate the files by hand.
3) Web browsing/Google Scholar integration – Zotero wins this
4) Nice interface – Papers/Mendeley
5) Clearly open source database – Zotero… Papers and Mendeley probably also use an open source db, but it’s not as clear in terms of access.
6) Mobile interfaces – all 3
7) Offline mobile interface – Papers and Mendeley… Zotero, IMO, suffers from GNU licensing models, which actually prevent its use on iOS platforms, at least that’s my interpretation.
8) Capacity for spatial data – Zotero… This one is extremely important to me. As a geologist, I want the ability to embed spatial information that I can later tease out and use to make maps. Most of these apps could be used for this, but I find Zotero’s interface much more adaptable.

So, given the above, none of these solutions fits me perfectly. Indeed, unifying on any one of these apps will cost me in terms of time and, if I choose Papers at least, money. My sad conclusion is that I think I’ll have to unify my solution on Zotero. For my mobile needs, I’ll probably start exporting from Zotero and importing into Mendeley. Sadly, I think I’ll have to cut Papers out despite its power.

Mass Wasting Videos

Dalhousie, northern India 2011

Calabria, Italy, 16 Feb 2010


Debris Flow: Clear Creek County, Colorado, Spring 2003

Liquefaction in New Zealand

Rock Slide in Tenn.

Japan 2004

Malaysia Tin Mine Disaster – slumping example

Oso Washington, 2014