Every year there is a new IT buzzword, the new thing that is supposed to change everything. And every year I have been disappointed by the gap between the message and the reality. Some would call this the marketecture vs. reality gap.
And most of these great ideas had been around for decades before they became the latest fashion.
As an example, do you remember HSM? Hierarchical Storage Management was the hot buzzword in storage for years. People loved hearing about it and would spend months or years evaluating all the available options before finding the one that most closely met their needs, and then didn’t buy it.
We all remember eco, lean, blades, object orientation, virtual, web 2.0, ERP, Social, groupware, collaboration, open source, C, and so many more. They each had their day in the marketecture limelight, where they were promised as the cure for all ills, and they all exist today. But none really lived up to the Pleasantville-esque beautiful world of perfection they were presented as.
The latest over-hyped and generally misunderstood buzzword is “cloud”. Actually, today pretty much everyone is using cloud technology. This blog is hosted on a virtual server that was automatically provisioned, and is really easy and cheap to set up and consume.
But here’s the thing, the cloud of today is still re-learning the lessons that were learnt by enterprise IT pioneers decades ago. When you look at a cloud, what you are really looking at is a mildly virtualized environment combining processing, memory, storage and networking.
Don’t misunderstand me, I am not saying clouds don’t exist or don’t work well; they do, and they work very well. But there is a lot more they need to offer.
Let’s look at what a future cloud needs to offer.
1. The user should have no knowledge of the elements of the environment they are using. When you use the cloud, you are actually running a process; you should not care about the operating system or the operating system’s resource needs. And when you run that process you should only care about the units of processing power it consumed. The idea that today you have to “provision” a server before you can run a process is just an artifact of the client-server world, and gets in the way of consuming clouds.
2. Security in the cloud should be holistic. Every process, application, element of storage, file, record or field needs to be automatically checked against a security database, where each user is given explicit rights to each element, or group rights that cover multiple elements. Access control and intrusion detection should be tightly bound together. No user should have excessive rights, and all accesses should be recorded.
3. All references should be universal. That is to say that all files and all lines of executable code, along with every field in every record in every database should have a universal resource locator just like a URL, so that it can be accessed wherever it sits in the cloud. This needs a very sophisticated and high performance routing system, but is critical to limitless clouds.
4. The hardware configuration of the processing elements of the cloud must tightly bind the CPU with the memory and allow for very fast swapping of code in and out. This allows code to be spread across the available processing units and swapped in and out as needed, without consuming resources when not required.
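To make point 2 concrete, here is a minimal sketch of what a holistic, record-everything access check might look like. All names here (`SecurityDB`, the element paths, the users) are hypothetical, invented purely for illustration; a real system would be vastly more sophisticated, but the core idea — one security database, explicit per-element rights, and an audit trail of every access, allowed or denied — fits in a few lines:

```python
from dataclasses import dataclass, field

@dataclass
class SecurityDB:
    # rights maps (user, element) -> the set of allowed operations
    rights: dict = field(default_factory=dict)
    # every access attempt, allowed or denied, lands here
    audit_log: list = field(default_factory=list)

    def grant(self, user: str, element: str, *ops: str) -> None:
        self.rights.setdefault((user, element), set()).update(ops)

    def access(self, user: str, element: str, op: str) -> bool:
        allowed = op in self.rights.get((user, element), set())
        # record the attempt regardless of outcome
        self.audit_log.append((user, element, op, allowed))
        return allowed

db = SecurityDB()
db.grant("alice", "db/orders/row42/price", "read")
print(db.access("alice", "db/orders/row42/price", "read"))   # True
print(db.access("alice", "db/orders/row42/price", "write"))  # False
print(len(db.audit_log))                                     # 2
```

Note the default: no entry in the database means no access, which is exactly the opposite of how most of today’s systems behave.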
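Point 3 can also be sketched in a toy form. The `cloud://` scheme and the registry below are my own invention for illustration; in a real limitless cloud the lookup would be a distributed, high-performance routing fabric rather than a local dictionary, but the contract is the same: any file, record, or field is reachable through one universal reference, wherever it lives:

```python
# Hypothetical registry standing in for a cloud-wide routing system.
# Every element, down to a single field in a single record, has a reference.
REGISTRY = {
    "cloud://storeA/db/orders/42/price": 19.99,
    "cloud://storeB/files/report.txt": "quarterly numbers",
}

def resolve(ref: str):
    """Resolve a universal reference to its value.

    A real system would route the request to whichever node holds
    the element; this sketch just consults a local table.
    """
    if ref not in REGISTRY:
        raise KeyError(f"unroutable reference: {ref}")
    return REGISTRY[ref]

print(resolve("cloud://storeA/db/orders/42/price"))  # 19.99
```

The interesting consequence is that the caller never knows, or cares, which machine the data sits on — which is exactly the location transparency the list above is asking for.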
Imagine how this cloud would work. It would offer users virtually unlimited processing capacity and allow very high system utilization, close to 100%, 100% of the time, while charging a user only for the actual processing they perform. And it would be incredibly secure.
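The “charge only for actual processing” idea can be sketched too. The `Meter` class and its rate are hypothetical, a toy stand-in for real metering infrastructure, but they show the shape of the model: measure the CPU time a piece of work actually consumes and bill for that, rather than for a provisioned server sitting mostly idle:

```python
import time

class Meter:
    """Toy pay-per-use meter: bill for CPU time consumed, not for a server."""

    def __init__(self, rate_per_second: float):
        self.rate = rate_per_second
        self.consumed = 0.0  # seconds of CPU time actually used

    def run(self, fn, *args):
        start = time.process_time()
        result = fn(*args)
        self.consumed += time.process_time() - start
        return result

    def bill(self) -> float:
        return self.consumed * self.rate

meter = Meter(rate_per_second=0.05)  # hypothetical price
total = meter.run(sum, range(1_000_000))
print(f"work result: {total}, bill: ${meter.bill():.8f}")
```

When no work runs, the bill is zero — which is the whole point, and the opposite of paying for a provisioned virtual server by the month.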
Actually, none of these ideas are new; they have been around for decades. The issue is that they are not used (well) today in the Wintel (Windows / Intel) and Lintel (Linux / Intel) based environments that the cloud is being built upon. Anyone who has used a mainframe in the last 10 years knows all of this.
Ask a mainframer about hypervisors and then compare what they tell you to Microsoft’s or VMware’s hypervisors.
I’m not saying that the mainframe got everything right. Actually I think that the mainframe got as much wrong as it did right. But there is an amazing amount of knowledge already in place that can be leveraged to make the next platform better. Ignoring this is a crime.
The issue is that Microsoft and Intel and all the other cloud leaders today have not used these technologies, and so are relearning the lessons of the past and repeating its mistakes.
The mainframe today has a ridiculous number of complex licensing models, simply because over time they all made sense to someone.
As the cloud is evolving, it needs to listen to the voices from the mainframe and avoid making these same mistakes.
But if you speak to the thought leaders of the cloud, they don’t see it this way. They think that looking backwards will slow their ability to innovate, and maybe they are right. But just maybe, moving slightly slower and building a better model is a worthwhile exercise.