The Internet Of Objects – Ideal Or A Path To The End Of Everything

In the 1980s and into the 1990s there was a movement in technology towards objects. The idea was that any and all data, applications, devices, etc. could be broken down into a series of discrete pieces of information, and the use of this information could be described in a consistent way. This would allow everything to work together harmoniously without complex pre-work to describe what everything was.

The issue (at that time) was that for most types of data the metadata needed to describe it was actually much larger than the data itself, and this was a huge problem when networks were slower than the spoken word and data storage was more expensive than postage. So the idea slowly died and morphed, and we have been left with a really messy series of standards which make sharing data and devices complex and expensive.

Now I know that I am paraphrasing the whole issue here, but there is no doubt that where we are is not where we want to be in terms of integrated systems.

Imagine if every piece of data was wrapped in a consistent set of metadata (data about the data).

Imagine if, when you were sent an email with a specific type of data attached to it, that data could describe its own value, keep a record of who created it, state which application was needed to use it, and even say where the code to use it resided.

Imagine if every internet-connected device could provide details of its use, location and current state when asked. When you entered a house you could automatically become part of that house's network: your environmental preferences would automatically be shared with the house, and your entertainment preferences would be available on each device in the house. Obviously assuming that you had the approval of the house's prioritized users.
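To make the idea a little more concrete, here is a purely hypothetical sketch in Python of what a consistent, self-describing wrapper for data and devices might look like. Every field name below is an assumption invented for illustration; no such standard exists today.

```python
# Hypothetical illustration only: a consistent "data about the data" wrapper.
# None of these field names come from a real standard; they are assumptions
# invented for this sketch.

self_describing_attachment = {
    "payload_type": "quarterly-sales-figures",
    "created_by": "jane.doe@example.com",
    "created_at": "2024-03-01T09:15:00Z",
    "opens_with": "any-spreadsheet-application",
    "code_location": "https://example.com/viewers/spreadsheet",  # where the code to use it resides
    "payload": "<the actual data>",
}

# The same idea applied to an internet-connected device: when asked, it
# describes its use, location and current state in the same consistent way.
self_describing_device = {
    "device_use": "thermostat",
    "location": "living room",
    "current_state": {"temperature_c": 21.5, "set_point_c": 22.0},
    "accepts_preferences": ["environmental"],  # e.g. a visitor's preferred temperature
    "requires_approval_of": "prioritized household users",
}

def describe(thing: dict) -> None:
    """Print what a person (or another system) would see when asking 'what are you?'."""
    for key, value in thing.items():
        print(f"{key}: {value}")

describe(self_describing_attachment)
print()
describe(self_describing_device)
```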

Imagine that when you program your phone's map app to take you to a specific place, your diary and the diaries of everyone you are meeting that day are automatically updated with travel and arrival times. The systems in the place you are going to are updated with your food and drink preferences, a desk is reserved for you automatically for when you arrive, and the meeting room you are planning to use is chosen based on the number of people who are meeting.

Imagine if, in an emergency, all the connected devices in a burning building could be viewed by those trying to help. Every temperature sensor and video feed would automatically be available to them, and any phone picked up would automatically connect to the on-site emergency teams without any buttons needing to be pressed. All water, gas and power could be selectively turned off or on by the emergency teams as needed.

Imagine if the sensors in every car, street light and road sign shared their readings amongst themselves, providing a mesh of knowledge available to every road user, so that journeys were planned with knowledge of current conditions and dynamically updated with the planned journeys of every other road user.

Imagine if a doctor were able to review a patient's health data collected by the patient's watch, phone, home and pharmacist, building a profile of the patient's history so that subtle changes in their physical condition could be picked up early, allowing for much better treatment.

If every piece of data and every internet connected device could describe itself in a consistent and meaningful way, the possibilities are endless.

There are of course risks associated with easier communication, risks that actually may be greater than the benefits.

It’s almost an evolutionary level risk.

Within a species, a continual flow of random mutations creates the likelihood that some variants will survive in any kind of changing environment; or, to put it another way, diversity is good.

If all information systems were to follow a single standard, then the possibility would exist of total destruction of the entire system. We have already seen that computer viruses designed to attack Windows systems can impact millions of machines at the same time. Smug Mac users have always felt safer, but that safety only comes from the simple fact that they are a separate sub-species. It is very hard for an infection to spread across species (biological or technical), but in a world where all data and devices were unified behind one standard, that standard itself could become the risk.

The value of total interconnectivity is immense, but the implications of everything being compromised would be too terrible to consider.

Is it possible to create an interconnected world that is secure enough to be viable?

That is the cold war not just of this century but probably for the whole future of humanity.


The only way is to “keep it moving”

Economics always seems very complex, and there is a reason for that. Like all mathematically impossible situations, the only way to explain it is to make it seem too complex to explain.

Through all of human history the idea of exchanging goods for services has required all the systems we see today, including laws, education systems, policing, security and so on.

Until a few decades ago the idea was that you would exchange services for things of equal value, such as a lump of gold or silver or a bushel of corn. To save carrying huge bags of gold around, notes were passed between people promising to provide the actual gold in exchange for a service. These notes of promise were called money, and to be able to print money you used to have to hold a store of precious metals equal to the amount you printed.

And then someone had the really smart idea of getting rid of the need to actually hold a store of gold in order to print money. This allowed as much money to be made as was needed to grow the economy. The only issue is that if everyone ever wanted to swap their money for gold at once, the whole system would collapse.

So to make sure a collapse doesn't happen, there is an implicit need to keep all the money moving around. It's like musical chairs with fewer chairs than people: as long as everyone keeps moving there is no issue, but if the music were ever to stop there wouldn't be enough places for all the money to sit, and the value of money would go down (inflation).

To make sure the money kept moving, governments used to tax money that was sitting still. This had two important effects:
1. The tax itself kept money moving (to the government, which could spend it on services).
2. It made it preferable for people to keep their money moving, to keep it out of the hands of the taxman.

Today we have a huge issue in that the amount of tax applied to money that is not doing anything is too low to encourage people to keep it moving. This means that too much money is just sitting in banks making money without doing anything, and this is really bad, as sitting money can be seen for what it is: a promise that can never be kept.

It's okay to be rich; actually it's the best thing to be. But the value of money will only decline if it does nothing of real value. Capitalism is without a doubt the best system the human race has ever had for improving the quality and length of human life. But capitalism demands that capital be used to build things; it fails if it sits around doing nothing.

Tax lazy money to keep the capitalist system healthy. Either it moves to the government, which spends it and keeps it moving, or the tax pushes people to use their money to build things. Either way is much better than notes of promise that cannot be kept sitting in places where their lack of true value can be seen.
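To put rough numbers on that incentive, here is a minimal sketch in Python. The 2% holding tax and 3% return are illustrative assumptions of mine, not figures from any real tax system.

```python
# Minimal sketch, not a real economic model: how a small annual tax on idle
# money changes the incentive to keep it moving. All rates are assumptions.

def idle_balance(start: float, holding_tax: float, years: int) -> float:
    """Money left sitting still, taxed each year for doing nothing."""
    balance = start
    for _ in range(years):
        balance *= (1 - holding_tax)
    return balance

def working_balance(start: float, average_return: float, years: int) -> float:
    """The same money put to work building things, earning a modest return."""
    balance = start
    for _ in range(years):
        balance *= (1 + average_return)
    return balance

if __name__ == "__main__":
    start = 1_000_000
    print(f"Idle for 10 years at a 2% holding tax: {idle_balance(start, 0.02, 10):,.0f}")
    print(f"Working for 10 years at a 3% return:   {working_balance(start, 0.03, 10):,.0f}")
```

Under those assumed rates the idle pile shrinks by roughly a fifth over a decade while the working money grows by about a third, which is the whole point of taxing lazy money.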


Who Writes the Epitaph?

Consider Gore, Al.  For a guy who was never president, he’s incredibly well known for a great many things.  Which one will rise to the top and be his final 1-line epitaph?  Some possibilities include:

– 1/2 of the fiery young Clinton-Gore presidential team for 8 years who drove the “Reinventing Government” initiative to cut waste and red tape in Washington, DC

– Inventing the Internet (and making us capitalize “Internet”)

– “Inventing” the Global Warming issue (or the GW myth if you’re skeptical), and winning the Nobel Prize for it

– Losing his home state of Tennessee (with 11 Electoral votes) in a presidential election he lost by 5 Electoral votes

[Photo: Candidate Gore's famous on-stage kiss]

– The icky, creepy on-stage, on-air erotic kisser of Tipper “Parental Advisory record labels” Gore

– Hanging chads and the most controversial election result in generations

– Co-founder of “Current TV” network (with Joel Hyatt)

…Or will Mr. Gore just be best remembered for being a hilarious “head-in-a-jar” (a preachy, boring one at that) on the animated TV show Futurama?

So, will Albert Arnold Gore, Jr. best be remembered for something “positive” or something “negative?”

There's a saying from an Australian philanthropist, lifesaver and pub builder known as Sheepshagger John which may help you predict the answer: "You know, a man can do a thousand great things, but if you 'shag' one lousy sheep…" (5) (7)


Why 3D just isn’t good enough

Eyes are amazing devices. When you look around, your eyes focus on whatever you are staring at directly, and everything else you see becomes softer in focus. But as soon as you shift your gaze, the new thing you are looking at directly comes into focus.

Your eyes are continually moving, changing the place you are focusing on. Everything in your field of vision is available to be focused on.

But when you go to see a 3D movie, the director (through the lens of the camera) has chosen what you should focus on, keeping everything else in soft focus. This is okay for a flat image, because it shows you where to look on the screen. We have become used to it (to some extent) and see it as artful direction.

But when we go to see a huge screen (such as an IMAX movie) in three dimensions, this just does not work.

When you are looking at a massive screen in 3D and only a small proportion of it is in focus, it is just annoying. The issue is that for a 3D movie to be totally immersive, you need to be able to see everything in sharp focus, not just the center of the director’s intent.

When you are watching a forest in a movie (such as Avatar) you want to see all the vines in sharp focus. This gives you the impression of being totally immersed in the movie. But the director chooses to keep only the spot where the camera is focused in sharp focus and the rest in soft focus. This just ruins 3D.

Now when a scene is computer generated (CGI) there is no lens forcing a shallow depth of field. And when the filmmakers choose to keep everything in focus, it works so much better. All of a sudden you are totally immersed in a 3D world. That's how 3D should be!

The only 3D movies worth seeing on a big screen are computer generated. Cameras with lenses need to be focused on a specific spot. That technology just doesn’t do it for 3D.
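To see why a lens can never show the whole forest sharp, here is a minimal sketch in Python of the standard depth-of-field formulas. The 50 mm focal length, f/2.8 aperture, 3 m subject distance and 0.03 mm circle of confusion are assumptions chosen purely for illustration.

```python
# Minimal sketch of the standard depth-of-field formulas: a physical lens at a
# given aperture keeps only a slice of the scene acceptably sharp. The example
# numbers below are assumptions, not taken from any particular film.

def depth_of_field(focal_mm: float, f_number: float, subject_mm: float,
                   coc_mm: float = 0.03) -> tuple[float, float]:
    """Return (near_limit_mm, far_limit_mm) of acceptable sharpness."""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_mm - 2 * focal_mm)
    far = (subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
           if subject_mm < hyperfocal else float("inf"))
    return near, far

if __name__ == "__main__":
    # A 50 mm lens at f/2.8 focused on an actor 3 m away:
    near, far = depth_of_field(50, 2.8, 3000)
    print(f"Sharp only from {near / 1000:.2f} m to {far / 1000:.2f} m")
    # A CGI scene has no physical aperture, so every vine can be rendered sharp.
```

With those assumed numbers only roughly the 2.7 m to 3.3 m band is sharp; everything nearer or farther is the soft focus complained about above, while a rendered scene is free to ignore the formula entirely.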

As filmmakers learn to shake off the shackles of 20th-century movie making and adopt a purer approach to making 3D, the genre will have a future. If they continue to hold onto depth-of-field photography, along with low frame rates to add creative blur, then the life of the cinema may be coming to an end.

If you go to watch a 3D movie and find the effect makes you a little queasy, that is because of the low frame rate and forced soft focus. If the whole image were created using CGI and shown at a higher frame rate, you would feel you were part of a 3D world on screen, not on a rollercoaster of blur and enforced head-shaking movement.

I suspect the move to create more immersive 3D movies has a couple of groups fighting against it.

Firstly, there will be those who are firmly convinced that the "traditional" rules of film making must be enforced, in the same way that those who thought movies were better without sound fought for their art (and the jury is still out on that one, of course).

And secondly, if a movie were made to be an immersive 3D experience (as I've described), the costs of production would be much higher and the resulting film wouldn't look as good without a massive 3D screen, reducing the revenue possibilities from smaller theaters and home viewing.

So the chance of seeing really great 3D is limited by tradition and greed.

Two huge factors which are (if history is any guide) likely to win.

Here’s hoping for fully immersive 3D


Businesses: Shoot Thyself in Foot Much?

Real face-to-face training is becoming a fairy tale.

Companies no longer really train employees.  It saves them money to cut or minimize training and new-skills programs.  And with employees no longer having to "waste" time in training, they can do more of their regular work (as well as cover some of the work that used to be done by the boss/co-worker/employee who no longer works at the company, and who may or may not ever be replaced).  As expected, companies still make it look good and tout their commitment to the "learning/knowledge growth thing" and their extensive library of online classes employees can access (on their own time, of course).

What about when you do need to hire?  There’s got to be plenty of unemployed candidates who’ve had the exact job role, right?   Those candidates will be thrilled just to get back to working and doing the exact job they had last year, right?  The recession solves the problem and keeps a lid on frivolous expenses like training new hires or cross-training employees to enable them to take on different roles in the company in the future.  Win-win!  It is win-win, right?

What's interesting is that senior management's cunning plan of using the recession combined with this hiring strategy is simply not working.  A recent survey confirms that employers are frustrated with the lack of skills of the candidates they're reviewing, and that it is taking much longer than anticipated to fill job openings because of it.  Firms that don't want to fill in the skills gaps of otherwise great candidates with some post-hire training are essentially annoyed that the candidates' previous employers didn't train them well enough (think about that one for a moment… and then consider if your firm fits that ironic description).  So short-sighted, it's almost unbelievable.  Good to Great?  Mmmmm, maybe not.

To these many, many firms I say, "Shoot thyself in foot much?"  Oh, and advice for employees and investors of these firms… short the stock.
