
Cheese Review: Tomme de Savoie, cow’s milk, France

The review from www.artisanalcheese.com:

Tomme de Savoie is a semi-hard cow’s milk cheese made in the valley of the Savoie in the French Alps. It has a delightful nutty flavor and smooth paste that melts in your mouth. The hard, powdery rind has an earthy aroma and is usually speckled with many types of indigenous and harmless molds.

For the casual cheese eater, this cheese was voted an A+. For the lover of complex cheeses, it is somewhat less deserving of that grade, but a wonderful cheese nonetheless.

Cheese Review: Double Gloucester, cow’s milk, England

From www.artisanalcheese.com:

Double Gloucester is a traditional English cheddar-style cheese from Gloucestershire with a bold orange (I mean really orange) color and a bright, eggy, somewhat sharp flavor.

The casual cheese fan had to be told not to eat all of the cheese in one sitting, so they give the cheese a grade of A+. The sophisticated cheese eater, however, found the cheese visually interesting but tasting a little bit like egg bread. So they give the cheese a grade of B.

Cheese Review: Gorgonzola Piccante, cow’s milk, Italy

From www.artisanalcheese.com:

Gorgonzola Piccante, a formidable cow’s milk blue cheese from the region north of Milan, is Italy’s answer to Roquefort. Its rough reddish rind protects a tender, light yellow, blue-flecked paste that is firm, moist, and buttery. The flavor is sharp and sweet.

The casual cheese eater does not like blue cheese in general, and gives this a surprising C-. The sophisticated cheese eater is in love with this cheese. If nothing else, it proves that the Italians can do whatever the French do, just better (see the World Cup). This cheese gets an A++.

Movie Review: Pan’s Labyrinth

A fantastic cross of For Whom the Bell Tolls and The Chronicles of Narnia.

Set in Spain in 1944, the film explores childhood, sacrifice, war, virtue, and civil strife. Unlike the other film I reviewed, Flags of Our Fathers, this film has no axe to grind, no point to shove down my throat. Instead it looks at the horror of the world and tries to understand it a little bit better.

The film tells two parallel tales. The first is about a woman who marries a Captain in the Spanish army who is tasked with eliminating some Red revolutionaries hiding in the mountains. We follow her struggles, and the struggle between the Reds and the Spanish army. And like all such civil wars, there is blood, there is torture, there is heroism, and there is a perverted sense of duty on both sides. The second is about a little girl, the woman’s daughter from a first marriage, who is caught in a dream world of her own where she is a Princess of a magical fairy kingdom, and the only way to return to her kingdom is to perform three tasks set by Pan. The two tales intersect repeatedly and ultimately tragically.

As an aside, I was watching the film and thinking to myself how tragic the events of the film were. On the one hand, I am supposed to feel sorry for the Reds. On the other hand, I am tempted to thank God that butchers like the Captain existed to protect us from the Reds. In the end, I decided that the misguided faith in the communist ideal destroyed what was best in both worlds. Both the Reds and the Fascists were idealists, visionaries, and patriots. And of the two, the Reds were the more misguided, believing in a myth that they hoped would somehow make the world a better place. All the Reds did was destroy their country, destroy themselves, and inspire the other side to extremes of violence. And if the last 50 years of history are any guide, of the two factions, it was for the best that the Reds lost. So not only do you feel sorry for their misguided ambitions, you feel relieved knowing that they lost, because there is no better world at the end of the rainbow.

This is a great film. Of the two films, The Departed (which I have also seen) and Pan’s Labyrinth, I would have voted for Pan’s Labyrinth if Scorsese had not been involved in The Departed. Perhaps this will be remembered as the better film that lost the year they decided to give Martin Scorsese the award he deserved for so many of his other films.

Eric Schmidt on the 20% time at Google

One of the enduring mysteries is out of what budget Google funds the 20% time its engineers are supposed to spend working on their own special projects. A plain reading of the statement would suggest that Google is overstaffed by 20%, or said differently: they have 20% more people than they need for their current projects. A negative spin on this would be that in a downturn they could lay off 20% of the company to meet expenses without impacting current deliverables. If this were true, I would be even more envious of the Google business model than I already am. But I was mistaken.

In an interview in Wired, Eric Schmidt explained:

How do people actually do 20 percent time? How do people actually figure out a way to actually get 20 percent of their time for that without working on weekends?

They work on weekends.

Do you compensate them in a way that encourages them to come up with these projects?

Yeah, but remember the kind of people who we hire are not here for the compensation, they’re here for the impact. And there’s essentially an internal draft system, that helps redistribute talent which is complicated and quite clever.

Do you actually have to declare what your 20 percent project is going to be?

People are encouraged to do so as part of the snippets.

Okay. That’s the incentive.

But it’s encouraged, not required. Again, there’s things you measure and require and there’s things that you encourage. The 20 percent is a cultural thing.

So you’re encouraged to come up with an independent project, and if you’re an engineer it’s part of being able to sit at the lunch table with your peers and be respected?

That’s right. Your peers all have one, so what’s yours?

At last the mystery explained: it comes out of the personal budget of the engineers.

Fascinating.

Updated June 16, 2007: fixed some errors in the HTML encoding. Foolishly assumed that the thin-client POC I was using worked as well as MS Word did.

Movie Review: Flags of Our Fathers

The irony of the 21st century is that we can look back on the one war this country agreed had to be fought (WWII) and be disgusted with how America fought it.

In Flags of Our Fathers Clint Eastwood chooses to explore the battle of Iwo Jima. The intent is to understand why people fight, how people fight and how we exploit images of that fighting for our own purposes.

The good part of the film is its very accurate portrayal of the battle. As an amateur student of the Second World War, I find the battle of Iwo Jima significant, apocalyptic, and difficult to understand. Significant because the loss of life convinced the US military high command and its political leadership that any invasion of Japan would be devastating. Apocalyptic because the tenacious Japanese defense of the island was what opened the door to the nuclear bomb. Difficult to understand because the military value was suspect, and the nature of the battle tends to be reduced to platitudes of the form: hand-to-hand combat, etc.

While the film stayed focused on the battle, it was gripping and interesting, if you find that kind of stuff gripping and interesting. There is the usual senseless mayhem and death. The bullets flying everywhere. The characters dying faster than you can remember their names. The reluctant and absurd heroism.

Until you see the rock that the Marines had to secure to defend the beaches, you cannot really understand what the term “hand-to-hand” combat on Iwo Jima meant. Trapped on the beaches, their skin their only defense, they had to eliminate a natural pillbox that had been fortified over the preceding two years. If the film had just been a movie about the battle, it would have been a tiresome movie to the general public but of great interest to those who cared about the battle.

Unfortunately the movie wanted to be a critique of the US political establishment. Clint Eastwood wanted us to understand how the government, shock and horror, in a time of war will do anything it can to exploit the masses to get them to support the war.

Really! Shame on them!
I mean, the government in a time of war will not engage in a nuanced debate over what needs to be done?

Shocking!

Clint Eastwood’s film is part of Hollywood’s polemic against the current war in Iraq. What makes this movie particularly tiresome and irritating is that, to prove the venality of Bush and Cheney, Eastwood decides to demonstrate that even our greatest leaders were just as loathsome in their exploitation of the credulous public. By proving that Truman and FDR lied for political gain (raising money for a war against a government hell-bent on controlling all of East Asia), he attempts to reinforce his basic argument:

See, FDR and Truman were liars! So if those great men, snicker, were liars, will you now believe me that Bush and Cheney are lying to you?

And all I can say is:

Really?

Clint Eastwood’s point is that the first casualty of war is the truth! Thank goodness he told us! Because no one else has ever said it. I mean, what was Samuel Johnson trying to say in 1758:

“Among the calamities of war may be jointly numbered the diminution of the love of truth, by the falsehoods which interest dictates and credulity encourages.” (from The Idler, 1758)

Wait, wait I know:

The truth is the first casualty of war!

It’s a sad statement that Hollywood propaganda is so obvious and so insulting. Clint Eastwood insults my intelligence by making such an obvious statement, and he ruins a great war film in the process. He further insults my intelligence by equating the actions taken in WWII with the actions taken in Iraq. The wars were not the same. The circumstances were not the same. The stakes were not the same, and the lies were nowhere near the same.

This film could have been a fun movie about the sacrifice and heroism of Iwo Jima. Instead it was a pulpit for a poor preacher to pass on what he thought to be revelation but in reality was common wisdom.

Spend your time and money elsewhere.

Where do NetApp’s hard technical problems come from?

In an earlier post I talked about the nature of NetApp’s hard problems, and I claimed that there were three factors:

  1. A basic technology that is incomplete
  2. A customer base willing to trade off features
  3. A customer base willing to pay for those features

In this post I’ll try to give some detail about items 1 and 2.

For NetApp, the basic technologies that have been driving our innovation, which is the fancy word for the set of hard problems that we’ve solved, have been and continue to be networks, storage media, and commodity computing.

Back in the day when NetApp was founded, the traditional computing system consisted of a CPU, RAM, some input and output devices, and some form of stable storage. This form of computing is still how desktop PCs and laptops are built. However, in the data center traditional computing systems have changed dramatically.

As an aside, data center is a term used to describe the set of computers that are not used for personal computing but are a shared computing resource across a company or institution. Normally we associate the term data center with the enterprise, but really any company that has a shared computing resource (such as email or file serving or print serving) has a data center, and this discussion applies to them as well as to the Fortune 500.

What caused that change was networking speeds and commodity computing.

The traditional computer system made a lot of sense because of the performance ratios between the components. Every normal application assumes that RAM has fast, uniform access and that storage has predictably slow access. The performance of the application is a function of the speed of the CPU and the speed with which you can get data to and from RAM and to and from stable storage. Now it turns out that RAM and stable storage are much slower than CPUs. Caching and clever algorithms are used to improve the performance of applications by trying to hide the latency of both RAM and stable storage. For storage, I’ll just state that those algorithms lived in the VM, file system, volume manager, and RAID subsystem.
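
To make that latency-hiding idea concrete, here is a minimal sketch, in Python, of a read cache that hides the latency of a slow stable-storage device behind fast RAM. All the names and numbers are invented for illustration; this is a toy, not NetApp code or a real file system.

```python
import time
from collections import OrderedDict

class SlowDisk:
    """Toy stand-in for stable storage: every read pays a fixed latency."""
    def __init__(self, latency_s=0.005):
        self.latency_s = latency_s
        self.blocks = {}

    def read(self, block_no):
        time.sleep(self.latency_s)           # simulate seek/rotation latency
        return self.blocks.get(block_no, b"\x00" * 4096)

class CachedDisk:
    """LRU read cache in 'RAM': repeated reads of hot blocks skip the disk."""
    def __init__(self, disk, capacity=1024):
        self.disk = disk
        self.capacity = capacity
        self.cache = OrderedDict()            # block_no -> data, in LRU order

    def read(self, block_no):
        if block_no in self.cache:
            self.cache.move_to_end(block_no)  # mark as most recently used
            return self.cache[block_no]       # hit: RAM speed, no disk latency
        data = self.disk.read(block_no)       # miss: pay the full disk latency
        self.cache[block_no] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict the least recently used block
        return data
```

Any workload with locality, which is to say almost any real workload, sees most of its reads served at RAM speed; only the misses pay the disk's latency.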

Now it turns out that the algorithms used to improve disk performance were executing on the same CPU that the application itself was running on. Worse, the storage sub-system was competing with the application for the far scarcer resource of memory bandwidth. As the application demands for CPU and memory bandwidth increased, the CPU cycles being consumed by the storage system were critically looked at, and reasonable people started to ask whether the storage system really did require so many CPU cycles. In fact, some folks actually believed that the existence of general-purpose storage sub-systems was the source of the performance problem. They therefore argued for eliminating all of those clever file systems and replacing them with custom per-application solutions. The problem with that approach was that no one wanted to write their own file system, volume manager, and RAID subsystem.

In software computer science, every problem can be solved with a layer of indirection. In hardware computer science, every performance problem can be solved with a dedicated computing element.

The computer industry (and the founders of NetApp in particular) observed that there was a layer of indirection in UNIX between the storage sub-system and the rest of the computing system: the VFS layer and the NFS client. They also observed that because Ethernet network speed was increasing, the storage subsystem could be moved onto its own dedicated computing element. In effect, the speed of the network was not an issue when it came to the predictability or slowness of the storage. Or more precisely, by moving the storage sub-system out of the main computer they could use more computing and memory resources to compensate for any increased latency caused by the stable storage no longer being directly attached to the local shared bus. In fact, in the 1990s NetApp used to remark that our storage systems were faster than local disks. They further observed that commodity CPU trends allowed them to build their dedicated computing element out of commodity parts, which made such an element cost-effective to build. Designing your own ASIC is absurdly expensive.
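
The point of the indirection layer is that applications code against an interface and neither know nor care whether the storage is local or on a dedicated appliance across the network. Here is a rough sketch of the idea, with hypothetical Python interfaces standing in for the VFS layer and the NFS client; none of this is the actual UNIX or NetApp code.

```python
from abc import ABC, abstractmethod

class FileStore(ABC):
    """VFS-like indirection: applications program against this interface."""
    @abstractmethod
    def read(self, path: str) -> bytes: ...
    @abstractmethod
    def write(self, path: str, data: bytes) -> None: ...

class LocalStore(FileStore):
    """Storage on the host's own disk, sharing its CPU and memory bus."""
    def __init__(self):
        self.files = {}
    def read(self, path):
        return self.files[path]
    def write(self, path, data):
        self.files[path] = data

class RemoteStore(FileStore):
    """Storage served by a dedicated appliance over the network (NFS-style).
    The extra network hop is repaid by the appliance's dedicated CPU and RAM."""
    def __init__(self, host: str):
        self.host = host         # hypothetical appliance address
        self.files = {}          # in-memory stand-in for the remote filer
    def read(self, path):
        return self.files[path]  # in reality: an RPC to self.host
    def write(self, path, data):
        self.files[path] = data  # in reality: an RPC to self.host

def application(store: FileStore):
    """The application neither knows nor cares which backend it was given."""
    store.write("/etc/motd", b"hello")
    return store.read("/etc/motd")
```

Because `application` only sees `FileStore`, swapping `LocalStore()` for `RemoteStore("filer1")` requires no application changes; that is exactly the substitution the VFS layer made possible.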

Now it also turned out that putting the data on the network had some additional benefits beyond just performance. But it was those networking and CPU and disk drive technology trends that enabled the independence of storage subsystems.

It’s almost too obvious to point out, but you cannot just attach an Ethernet cable to a CPU and a disk drive and have networked storage. In fact, the challenge we have at NetApp is how to combine those components into a product that adds value. In effect, the source of all of our hard problems is how to write clever algorithms that exploit the attributes and hide the limitations of disks to add value to applications that require stable storage within a particular cost structure (CPU and RAM and disk).

Which gets me to item 2 on my list: trade-offs. If you had an infinite budget, you could construct a stable storage system with enough memory to cache your entire dataset in battery-backed RAM. You could imagine that periodically some of the data would be flushed to disk. Such a storage subsystem would be fairly simple to construct but would be ridiculously expensive. In effect, too few customers would pay for it.

In effect, customers want a certain amount of performance that fits into their budget. The trick is how to deliver that performance. And performance, it turns out, is not just about how fast you perform read and write operations; it encompasses all of the tasks you need to perform with stable storage. And this is where things get messy.

Performance for a storage sub-system is of course about how fast you can get at the data, but also how fast you can back up the data, how fast you can restore your data, and how fast you can replicate the data to a remote site in case of a disaster. And it turns out that for many customers those other factors are important enough that they are willing to trade off some read and write performance if they can get faster backups, restores, and replication. It further turns out that for many customers the performance of an operation is also a function of how easy it is to perform said operation. For example, if a restore takes 3 minutes to perform but requires 8 hours of setup before you can hit the restore command, customers understand that the performance is really 8 hours and 3 minutes.
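
In other words, what the customer experiences is the end-to-end elapsed time, not the headline speed of the final step. A trivial Python illustration, using the hypothetical numbers from the example above:

```python
def effective_restore_minutes(setup_hours: float, restore_minutes: float) -> float:
    """End-to-end restore time in minutes: setup dominates the headline number."""
    return setup_hours * 60 + restore_minutes

# A '3-minute restore' that needs 8 hours of preparation is really 483 minutes.
print(effective_restore_minutes(setup_hours=8, restore_minutes=3))  # 483.0
```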

So really, performance is a function of raw read and write speed; the speed of backup, restore, and replication; and ease of use.

It turns out that if you optimize for any one of those vectors exclusively, you will fail in the marketplace. To succeed you have to trade off time and energy on one in favor of another.

So where do the hard problems come from at NetApp?

  1. Building a high-performance storage subsystem that is reliable. This, in many ways, is a canonical file system, volume manager, and RAID-level problem; however, because we are a dedicated storage sub-system, we have other specific challenges.
  2. Building efficient mechanisms for replication, backup, and restore that, unless you are careful, can affect item 1. This is a unique area and is relatively new in the storage industry. Although replication has existed for a while, how backup and DR should be optimally done is not yet fully understood.
  3. Building a simple storage system. For NetApp a key value proposition is that the total cost of ownership of our devices is lower than our competitors’. It turns out that simplicity is a challenge not only for one storage subsystem but also when you have several hundred, but I’ll talk about that in a later post.

So now I’ve hopefully explained where our hard problems come from. In my next posts I’ll discuss each of these sub-bullets in more detail.

On the nature of our hard problems …

In my post about why you should work at NetApp, I described the four fundamental reasons as

  1. Work on something important
  2. Work on hard problems
  3. Work with intelligent people
  4. Have your contribution matter

I explained in an earlier post why what I do is important to our customers.

So now let me tackle the question of hard problems. In this post, I’ll limit myself to defining the general nature of what a hard problem for a company like NetApp is. In later posts, I’ll get into more specifics about the kinds of hard problems we work on.

The first thing to do is define a hard problem. A traditional definition of a hard problem is:

A problem is fundamentally hard if no solution at any cost is known to exist, and previous attempts at solving the problem have resulted in failure. A problem may be impossible if no solution exists, but we will assume, for the purpose of this post, that a problem is hard only if a solution exists. This class of problems is typically the domain of basic research.

At a company like Network Appliance, we do not typically explore problems in this space, although we have in specific areas over the past 15 years. Basic research is just not our focus. If you are interested in working on these kinds of problems, my recommendation is to get a Ph.D. in Computer Science and then find an academic or research lab position.

The nature of the hard problems that NetApp engineering works on fits into the following bucket:

There exists some basic technology that offers some compelling features to a user but does not completely satisfy the requirements of the user. The user is willing to pay for the basic technology. The user is willing to trade off some features for other features.

To understand how this applies to NetApp, I need to explain a whole bunch of things. The first is the nature of the basic technologies that we rely on and how they influence us. The second is why the user wants to use that technology. The third is how the basic technology cannot meet the requirements of the user. The fourth is that there is an opportunity to build interesting products that can satisfy the requirements of the consumer.

Once I’ve explained those four things, I can explain in more detail where our hard problems lie.

But I’ll leave that for another post….

Updated with some cleaned up grammar.

Lamb II

One of the more irritating aspects of my life is that I go through these long protracted periods of painful activity where things that I care about wither. One of those things is my blog.

Like last year, my wife and I decided to cook a whole lamb. Unlike last year, where half the fun was in trying to find the right components, this year was all about execution.

Here you can see the lamb, purchased from Draeger’s.

You can see farmer kostadis bring the lamb into the house with his trusty dog Tony:

My wife helped me tie the lamb up (okay, she did most of the tying!):

With the lamb trussed, we could begin the process of cooking the lamb. Now it turns out that machinery exists to dramatically reduce the manual labour involved (approximately 5 hours of rotating), but manual labour does have its advantages:

Something tells me that the delightful nibble plate you see on the table would not have been made by my wife if we were using a motor.

This year’s innovation was to use a small plank of wood in addition to the 6-foot dowel. The plank reduced the strain on our wrists. Check out my friend Marcin rotating the lamb:

Marcin helped the most in this effort, showing up at 12:00 noon. But there were more. Here are some of the people I managed to take pictures of.

Jay Moorthi:

And later, David Grunwald:

Brian Quirion:

And my youngest assistant was Lincoln Mendenhall:

When the cooking was done, we could begin the carving:

Of course, this is a Greek Easter, so having lamb is considered only part of the meal. We had an incredible spread of other food (that this time we took pictures of):

My wife and I look exhausted, and little did we know that we were only halfway done at that point …

My old and dear friend Sanford Barr showed up:

All in all it was a great party, with approximately 42 sentient beings (if you count dogs) in attendance. The only tragic victim was my stuffed dog, who had wine spilled all over him. Here we see him suffering over the bathtub:

And here we see the dastardly villain who spilled the wine trying to clean him:

For more pictures check out: http://kostadis.smugmug.com/gallery/2849696#152773237