Category Archives: technology

Another tool

The other day, I was reflecting on how M$ had created this incredible blogging tool that costs money: MS Word.

So I figured the blogging tool market would be decimated by MS Word, except I forgot the “costs money” part.

Now I discover that they have a free blogging tool called Windows Live Writer.

What’s interesting is how they have optimized this tool around blogging while preserving the essential utility of Word.

As I explore more, I’ll tell you more.

iPhone: The greatest cell phone browser ever?

Just the other day I was futzing with the Nokia N95 web browser and discovered that it had several features similar to the iPhone’s. For example, the N95 has a mini-map:

[Image: S60 mini-map]
and the time-travel feature:
[Image: S60 time-travel feature]

I was intrigued. I checked out the respective Apple and Nokia corporate web sites and discovered that the touted features were in fact identical. So I set out to discover the source of the similarity.

Now it turns out that in 2005 Nokia and Apple agreed to partner on building a web browser based on Safari:

Nokia has announced that it is using open source software in developing a new mobile Web browser for its Series 60 SmartPhone — and that this has been developed in cooperation with Apple.

The Series 60 browser will use the same open source components, WebCore and JavaScriptCore, that Apple uses in Safari, which is based on KHTML and KJS from KDE’s “Konqueror” open source project.

Nokia said that it intends to continue its collaboration with Apple — and will actively participate in the open source community to further develop and enhance these components, contributing Nokia’s “expertise in mobility,” the company said.

And in fact Nokia’s open source project page describes exactly how the Safari web browser is the basis for their browser.

Mystery at last revealed: the reason the browsers are so similar is that they are the same browser.

I will observe that this further confirms my near-universal irritation with the quality of technology journalism. The Apple hype machine may have implied the iPhone was unique in its use of Safari, but a few moments of fact checking would have revealed that Apple was building on someone else’s technology.

The cell phone camera

Originally, I was very skeptical of the value of the cell phone camera. I wondered why anyone would use a crappy digital camera embedded in their phone when perfectly good cheap digital cameras existed.

I concluded that there were two reasons:

  1. Convenience. You always carry a cell phone, but you don’t always carry a camera.
  2. Cost. A camera costs $200. A cell phone with a camera may cost $0. For someone who is cost-conscious, which is the vast majority of the human race, this is a significant chunk of change.

I was so stupid. 

I’ve recently been playing with my cell phone’s (a Cingular 8525) digital camera. And I discovered that there were three really, really good reasons to use a cell phone camera:

  1. Simple uploads to a photo sharing site. I finish taking the picture and can quickly upload it, without having to painfully copy it to my laptop first and then upload it from there. This makes it very easy for me to share my pictures.
  2. Ease of tagging and labeling. I don’t remember where I took a picture or why. But my cell phone has a keyboard! So once I am done taking a picture, I quickly label it. This way I do not have an endless, mysterious collection of pictures.
  3. Geo-encoding. Although my cell phone does not have a built-in GPS, my wife’s N95 does. After seeing how her pictures (and her routes) can be geo-encoded, I more or less decided that my next camera will either have a built-in GPS receiver or connect to a Bluetooth-enabled GPS receiver. (See the sketch after this list for what geo-encoding amounts to.)
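
To make the geo-encoding point concrete, here is a minimal Python sketch of the coordinate conversion involved. The photo location is made up, and actually writing the tags into a JPEG would need an EXIF library, which is not shown.

    # A minimal sketch of what "geo-encoding" a picture amounts to:
    # EXIF GPS tags store latitude/longitude as degrees, minutes, and
    # seconds plus a hemisphere reference (N/S, E/W). The coordinates
    # below are made up.

    def to_dms(decimal_degrees):
        """Convert a decimal coordinate to (degrees, minutes, seconds)."""
        dd = abs(decimal_degrees)
        degrees = int(dd)
        minutes = int((dd - degrees) * 60)
        seconds = (dd - degrees - minutes / 60) * 3600
        return degrees, minutes, round(seconds, 2)

    lat, lon = 47.6205, -122.3493  # hypothetical photo location
    print(to_dms(lat), "N" if lat >= 0 else "S")
    print(to_dms(lon), "E" if lon >= 0 else "W")
    # -> (47, 37, 13.8) N and (122, 20, 57.48) W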

If you combine all of these reasons, then the only reason not to use a cell phone camera is the quality of the lenses and sensors. But that’s where the Nokia N95 and Nokia N93i show that the future is the cell phone camera, not the standalone snapshot camera. The N95 actually has a 5MP camera with a fantastic lens and a working flash, but no zoom. The Nokia N93i is a great camera with a 3x zoom, but has a less than ideal form factor. Both of these cell phones overcome the only real reason you would not use a cell phone camera: that the quality of the pictures is sub-par.

I suspect that the standalone snapshot digital camera will eventually disappear to be replaced by the cell phone camera.

Defining the computer science boondoggle

One of the persistent worries I have is whether a particular technology I am in love with is really a boondoggle. In other words, the technology solves a hard problem in an interesting way and is useful because of some abnormal discontinuity in computer system performance, but in reality it is of transitory interest, so spending much time and energy on the problem is of limited value.

Having said that, I thought to myself, well what is a computer science boondoggle exactly?

So I came up with the following prerequisites for a boondoggle:

  1. There must be a free compute resource.
  2. An existing application can only take advantage of the free compute resource if it is modified.
  3. The performance gain of the modified application is significant.

The field goes on a boondoggle when someone determines that the barrier to using the compute resource is the application modification, and so tries to come up with ways to transparently take advantage of said resource. In effect, the allure of this free compute resource and the challenge of making it usable drag people down a rabbit hole of trying to make it easy to transparently use the new resource!

The boondoggle ends when the compute resources that require no application modification eventually catch up, eliminating the need to modify applications or use special libraries to get superior performance.

As an example, consider Software Distributed Shared Memory (SDSM). SDSM began life when people observed that in any engineering office there were idle workstations that could be connected together to create a cheap supercomputer. People then observed that to take advantage of said resource, applications would have to be modified. So some applications were modified, and the performance gains were real. And it could have all ended there.

The SDSM boondoggle took hold when some bright folks realized that modifying every application was going to take too much time. So they decided to invent a rather elegant way to allow applications to run, unmodified, on these networks of workstations. Of course, the problem was that applications assumed uniform memory access while SDSM provided non-uniform memory access, whose non-uniformity was measured in the tens of milliseconds. Because of the latency issues and the non-uniform memory access, these unmodified applications performed poorly. It could have all ended right there, but the allure of this transparent way to take advantage of the performance of these idle workstations was so tantalizing that folks spent a significant amount of energy trying to make it work.

They failed.

What did happen was that computers got bigger, faster, and cheaper, making the performance gains less and less interesting given the amount of effort required to make them work using SDSM.

So SDSM was trying to exploit a discontinuity in computing resources (the relative cost of a collection of workstations versus a supercomputer), and it was trying to do so in an interesting way, but in reality it was not of long-term value because of the hardware trends that were in place at the time.
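
To see why the transparent approach was doomed, here is a rough back-of-envelope sketch in Python. The tens-of-milliseconds remote latency comes from the SDSM story above; the local access time and the one-percent remote fraction are assumptions of mine.

    # Back-of-envelope: why transparent SDSM performed so poorly.
    # Assumed: local memory access ~100 ns, remote page fault ~10 ms
    # (the tens-of-milliseconds non-uniformity described above), and a
    # hypothetical 1% of memory accesses hitting remote pages.

    LOCAL_NS = 100            # assumed local access time (ns)
    REMOTE_NS = 10_000_000    # ~10 ms remote page fault, in ns
    REMOTE_FRACTION = 0.01    # assumed fraction of accesses that go remote

    effective_ns = (1 - REMOTE_FRACTION) * LOCAL_NS + REMOTE_FRACTION * REMOTE_NS
    print(f"effective access time: {effective_ns:,.0f} ns")
    print(f"slowdown vs. all-local: {effective_ns / LOCAL_NS:,.0f}x")
    # Even at 1% remote accesses the average access is ~100,000 ns,
    # roughly a 1000x slowdown, which is why unmodified applications
    # written for uniform memory access fell over.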

Explaining what we do at NetApp

So Mike’s right. At the end of the day, you have at least the following four basic motivations when you pick your first job:

  1. Work on something important
  2. Work on hard problems
  3. Work with intelligent people
  4. Have your contribution matter

We assume that you are making enough money, that the job is in a field you are interested in, that the cultural fit is real, etc.

So why work at NetApp? Because at the end of the day we work on important hard problems. The individual contributions do in fact matter. And you get to work with very intelligent people.

But what do we do?

Let me tell you a story. My wife takes lots of pictures. Our old Dell was dying, and the Buffalo LinkStation we were storing our photos on was making ugly whirring sounds. Since I work at a storage company, I am all too familiar with how disks can fail. (I sometimes worry that I am like one of those people who watch too much House and think they have contracted cancer every time they have an ache or pain.) I therefore decided, since I was tasked with the miserable job of buying a new computer, that her new computer would have some form of RAID. I bought the computer from Dell because I had a reasonable amount of success with them over the years. The machine was configured with two disks that were mirrored. Now it turns out that Dell also sold (gave) us a copy of Norton Ghost, a disk-to-disk backup utility.

When the machine arrived, the RAID-1 disks were partitioned into two partitions: an active file system and a backup partition. The active partition was 170GB and the backup partition was 50GB. I was confused, because typically you need more backup space than primary space, but I figured there must be some rhyme or reason. Maybe Norton Ghost was clever enough to only copy the “My Documents” folder. Maybe it did compression. Maybe it did something really cool.

Well, it turns out that Norton Ghost just does a full partition copy from one partition to another. And it turns out that the minute the used space in her primary partition exceeded 50GB, her backup software stopped working.
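
Put differently, the failure is simple arithmetic: a full partition copy can only succeed while the used space on the primary partition fits on the backup partition. Here is a minimal Python sketch of that check; the drive letters are hypothetical.

    # A minimal sketch of the check that the Dell/Ghost setup ignored:
    # a full partition copy only works while the used space on the
    # primary partition fits on the backup partition. The drive
    # letters are hypothetical; shutil.disk_usage returns a
    # (total, used, free) tuple of bytes.
    import shutil

    primary = shutil.disk_usage("C:\\")   # the 170GB active partition
    backup = shutil.disk_usage("D:\\")    # the 50GB backup partition

    if primary.used > backup.total:
        print(f"Backup will fail: {primary.used / 1e9:.0f} GB used on the "
              f"primary exceeds the {backup.total / 1e9:.0f} GB backup partition.")
    else:
        print("The backup still fits... for now.")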

After looking into the problem for several hours, I concluded that the partition scheme Dell had invented was useless. Most of that time was spent trying to find some reasonable rationale for their division of disk space. There was none. Finally, I decided that RAID-1 was probably good enough to protect her data from hardware failures, and that a USB hard drive would be good enough for backups until I bought a StoreVault.

What was a simple problem for my wife (give me some reliable storage and make it easy for me to back up my data) turned into a complex problem of finding the right technology and configuring the software and hardware appropriately.

So now imagine if I had to solve the same problem on 10 machines, or 100 machines. What took several hours might take several days. And before you know it, I am swamped trying to figure out how to set up backups for each individual user.

So what does NetApp do? NetApp sells reliable storage that performs well. What differentiates our storage from our competitors’ is its simplicity. Many of the time-consuming, painful tasks that people normally have to perform with other vendors’ reliable storage are just simpler using NetApp. The magic in our simplicity is not just in a pretty GUI; a lot of it is in our core platform. Although some of our user interfaces are pretty slick.

I’ll try in another post to explain why what we do is hard.


The difference between Computer Scientists and Humans

Recently a bunch of computer scientists were arguing over the correct name for an entity that non-techies would use. They proposed something like foograph. This was quickly discarded, but it once again demonstrated the chasm between the computer scientist and the rest of the world.

Rest of humanity:

A graph is a plot of data

Computer Scientists:

A graph G = (V(G), E(G)) consists of a set of vertices, denoted V(G), and a set of edges, denoted E(G), such that each edge contains two distinct vertices in V(G).

For proof, look at this screen shot:

[Image: Computer Scientists vs Humans]
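
For what it’s worth, the computer scientist’s definition translates almost word for word into code. Here is a minimal Python sketch; the example vertices and edges are made up.

    # A graph as a computer scientist defines it: a set of vertices V
    # and a set of edges E, where each edge is a pair of distinct
    # vertices drawn from V. The example graph itself is made up.
    V = {"a", "b", "c", "d"}
    E = {frozenset(("a", "b")), frozenset(("b", "c")), frozenset(("c", "d"))}

    # Every edge must contain exactly two distinct vertices in V.
    assert all(len(e) == 2 and e <= V for e in E)

    # Not a plot of data in sight.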

Google Pages….

<sarcasm>

I’ve become one of the blessed. I too can now create web pages using Google’s Web 2.0 Web Page Generator! And yes the product is the coolest, spiffiest, most innovative piece of technology out there! It breaks new exciting ground!

</sarcasm>

I don’t actually intend to do much with that page other than to experiment with the technology. The Google folks sometimes have interesting UI ideas.

Google Pages, however, is at first blush a disappointment.

Google Maps redefined what we expected from web based map products.

Google Pages is just another web page editor that is integrated with web publishing software. My blog software is as sophisticated and easy to use as the Google product.

We’ll see …

Google AdWords

One of the reasons I set up my own blog was to explore the capabilities of Google AdWords.

What’s interesting is how piss-poor it really is.

The ads key off of random phrases in my blog that have absolutely nothing to do with its actual content.

I do movie reviews, some coffee reviews, and the occasional random topic. Because I do movie reviews, my blog tends to have keywords that are all over the place.

You would think that Google’s brain trust would key off the categories and serve ads related to movies, but no. I have ads for Mormons, Islam, and, my all-time favorite, a lesbian Christian.

I was thinking Google’s AdWords would generate revenue.

They don’t.

But they do generate humor.