Tuesday, November 3, 2009

One Person

Abstract: You are one person, using multiple devices - why shouldn't your stuff follow YOU rather than the device? Since we don't like depending on a 3rd party, the "home" cloud is the answer.


--------- Issue

Most programs today are isolated islands of functionality. Web 2.0 transforms that, allowing you to mesh together pieces of functionality and information. The same is not true on personal computing platforms, i.e. your laptop.

The bookmarks you store on your laptop can't be accessed from the desktop at home and vice-versa. Same for your notes and pretty much anything else, unless you copy them around manually.

Option 1: store information on the web (a centralised location). The big benefit is the ability to use that information from anywhere, on any device. The drawbacks include big brother and relying on a 3rd party for availability (including backup etc). And, of course, the need to be online... although smart solutions can use local caches.

The big brother issue is getting worse. However much you trust ALL levels of government of ALL countries that may have a claim on you or your property, AND all their agencies, there are more and more corporations mining all kinds of data about your person, for different reasons - some as benign as offering you a new credit card, some as bad as debt collection, setting insurance premiums etc. Let's not forget identity theft.

Option 2: use a home server. By now almost everyone has a server at home, constantly connected to the big cloud, whether it's downloading movies or hosting your own blog. Why not use that, instead of a 3rd party, to serve other stuff?

There are many solutions for hosting your own stuff and accessing/sharing it from/with the world. There's lots of software to serve security camera feeds or TV programs from your home, there's the Windows Home Server etc... the trend is already in place.

Having resolved (somewhat) the big brother and 3rd party dependency problem, there's still the issue of needing to be online.

Option 3: a fully distributed and synchronised personal cloud. Store information locally, on the device that creates it (like the work desktop where you save a favourite) and synchronise all devices (peer-to-peer or a properly distributed solution). The only drawback is the lack of on-demand availability of the information from other devices, until the sync occurs. This can be solved, however, by connecting your own PC to the net... or by combining with option 1 or, rather, option 2.
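To make option 3 concrete, here's a minimal sketch of the sync idea, assuming each device keeps its own bookmark store with a timestamp on every entry, and a periodic peer-to-peer sync merges stores with a last-write-wins rule. The names, URLs and data structures here are made up for illustration, not taken from any real agent.

```python
def merge_stores(local, remote):
    """Merge two {url: (title, timestamp)} stores; the newest entry wins."""
    merged = dict(local)
    for url, (title, ts) in remote.items():
        if url not in merged or ts > merged[url][1]:
            merged[url] = (title, ts)
    return merged

# A favourite saved on the work desktop at t=100; the laptop has an
# older copy of it, plus one entry of its own:
work   = {"http://woodworking.example": ("Hardcore Woodworking", 100)}
laptop = {"http://woodworking.example": ("Woodworking", 50),
          "http://cbc.example": ("CBC show", 60)}

laptop_after_sync = merge_stores(laptop, work)
```

Running the sync in both directions on every pair of devices converges all stores to the same state, without any central copy.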


------------- Vision

The vision is that of you and your information following you. In fact it's not following you...it's just "there" for you to use. Whatever computer in your "home" cloud you use, your information is there.
I mean, will you upload ALL your favourites to facebook, just so you have them available in the living room when you get home? I don't think so!

The solution is the logical conclusion of the same distributed vs. centralised debate that's been raging for the past few decades. All those "in the know" know that it's a wave function. We may now be heading towards the peak of centralised, internet-based clouds, but as more PERSONAL processing power comes online, split between more types of connected personal devices, the swing the other way is just a matter of time... wave, right?

So, I dream of many agents, running on many personal devices, connected in your "personal" or "home" cloud, sharing all kinds of information and cooperating.

So, when you mark a "favourite" on your work desktop, it automatically gets replicated on your laptop, your home desktop (the cloud's gateway) and all the computers in the house, including the one in the living-room. There's nothing left to do but, when you get home, sit down and enjoy it on the big screen!

No 3rd party knows you enjoy that hardcore woodworking show or that you're trying to fix the toilet seat, no insurance company can mine the fact that you once watched 3 illegal races on youtube etc... all of which is usually available on your facebook or whathaveyou.

-------------- The social aspect

Today, no application is complete without considering the social aspect. You could obviously push some of your favourites to facebook, or share with friends in "friend" clouds.

-------------- Flexibility, customisation

Probably the biggest gain over a "central" cloud like gmail is flexibility and customisability. Gmail is gmail is gmail. That's exactly what it does and, while you may change its fonts and even access it remotely via an IMAP or http API (thank you, Google), that's still exactly what it does.

A personal cloud, though, can be customised. You decide what it does and how it does it.

-------------- try it

If you want to play with such an agent, try a preview of mine, at http://razpub.googlecode.com/downloads

Read more about it at http://wiki.homecloud.ca

You can even try the remote favourites sharing prototype... follow this link after you've downloaded, installed and started it on several computers: http://localhost:4444/mutant/capture.html - after capturing, go to another computer in your home cloud and see the links at: http://localhost:4444/mutant/asset/Link

This is written as of version 0.x so the links may change - the use case however will be maintained up-to-date in the wiki, at http://wiki.homecloud.ca/savedlink

Friday, October 2, 2009

Modern Applications

ABSTRACT: today it is unacceptable to have any application built as an independent island of functionality. It is necessary to expose functionality via simple protocols (http).


Really, it is unacceptable today to have any application built as an independent piece of functionality. No application is or should be complex enough to do EVERYTHING. Thus, since we do not view our users as the mandatory human servers of the omnipotent computer -- with the noble lifetime goal of clicking through the menus, dialogs, wizards and buttons that the all-mighty programmer has imposed on them -- all application components NEED TO OFFER API-based access to a basic set of services/functions/objects/whathaveyou.

These span a large number of functional areas, from controlling the application remotely, scripting its behavior, input/output/logging etc, so we'll take them one-by-one in future posts.

The point being that applications become interoperable and subject to automation. There's really no other way to automatically turn down the volume of the DVD player because you answered a call on skype! Or to get a tweet on your cellphone when the torrent has finished downloading the latest CBC show.


--- Interoperability == standardisation?

This thinking is second nature to any Unix hacker, since they always did "find|grep|cut|wc|sort", but it seems too complicated for anybody else to comprehend, especially window-heads.

With physical devices, standardisation is, granted, not easy. Standards arise from a need which is already fulfilled by existing devices, cables and plugs. Then the plugs get standardised and devices become interoperable.

The same was true in the software world (find|grep), but it's been false for a long time, especially since the advent of the stupidifying mouse and the un-parseable pixel.

Command lines have always been around and http has also been around for a long time. There really is no reason not to combine both, when all you want is users to use the functionality your code has to offer.

Ideally, all applications would be certified as "open" and "interoperable" and we will get there, in time. DLNA/UPNP is an example of such an interoperable framework, including certification. So is OSS/J (a set of telecom APIs) and others.


--- using http at rest to describe and interact with the object-oriented world

So, what should ideally happen? We'd have unified protocols for access and interoperability: HTTP, telnet, command lines come to mind. Formats are also an obvious need: HTML/XML/JSON/text. Let REST dictate how things work, but mandate only a modicum of human intervention.

Beyond the needs, there are the wants. We'd like each application to expose its internal models (objects, services, methods, actions, functionality) in a standard way, parseable and understandable by others.
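As one way this could look, here's a sketch that uses introspection to turn a plain class into a description another agent could read (shown as a dict; serialise to XML or JSON as you prefer). The Link class and its fields are invented for illustration.

```python
import inspect

class Link:
    """A saved favourite."""
    def __init__(self, url, title):
        self.url = url
        self.title = title

    def play(self):
        """Open this link on the local screen."""
        pass

def describe(cls):
    """Build a parseable description of a class: its attributes and methods."""
    methods = [n for n, m in inspect.getmembers(cls, inspect.isfunction)
               if not n.startswith("_")]
    attrs = list(inspect.signature(cls.__init__).parameters)[1:]  # drop 'self'
    return {"class": cls.__name__, "attributes": attrs, "methods": methods}

# describe(Link) -> {'class': 'Link', 'attributes': ['url', 'title'],
#                    'methods': ['play']}
```

An agent that publishes such descriptions lets its peers discover what it can do, instead of hard-coding each other's interfaces.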

If you extend that to the interoperable web, you get the semantic web. Well, almost, since that's been designed by DBAs - they'd like to call it the "data web" and hide logic in a view that offers more data :). Sorry, couldn't help it!

I live in a world surrounded by objects I'm interacting with. Data/information has a very important role to play, but it's not the end-all. My newspaper can not only filter for me the latest developments on that accident, but can also manage my account and micro-payments. Having knowledge of an intermediary PayPal only serves to confuse me and keep my mind busy with concepts I don't care about, since I already told my "gate keeper" that I trust my newspaper somewhat...

The semantic web guys have it right, though. Exposing standardised schemas of an application's objects and functionality is the way to go. Scoping these (my "train" is different from your "train") makes obvious sense (think namespaces).


---- So, vat do you vant?

In the B2B universe (now turned SOA) there are lots of WS-based standards, including security, identity exchange etc. These don't fare well on the web, though. Why write things twice? It has to be simple-over-http. PERIOD, dummy.

So, each modern application:
- embeds an http server
- exposes its functionality/internal models etc
- is designed to be interoperable
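To show just how little code these three points require, here's a minimal sketch using Python's standard library in place of a real embedded server: the "application" exposes part of its internal model (a list of links) over plain http. The /asset/Link path mimics the URL style used elsewhere in this post; the data is made up.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# The application's internal model - here, some saved links:
LINKS = [{"url": "http://cbc.example", "title": "CBC show"}]

class AssetHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/asset/Link":
            body = json.dumps(LINKS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):  # keep the example quiet
        pass

# Port 0 asks the OS for any free port; a real app would use a fixed one.
server = HTTPServer(("127.0.0.1", 0), AssetHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
```

Any other agent (or a plain browser) can now read this application's links with a simple GET - no proprietary protocol, no screen-scraping of un-parseable pixels.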

I would like everyone to stop buying and using any application that doesn't meet these criteria. Harsh, but then I'm in a bad mood today...

------ So, what's a developer to do?

Embed an http server. In case you need a small, lightweight embedded web server, check out my public project at http://razpub.googlecode.com . There are lots of others out there - actually, I recommend you get one that supports the servlet standard and write servlets.

Think about and define your business model; spell out your domain entities. Use fewer services and more objects when defining your API, and play nicely with REST.

Define your model in whatever format you want - just make sure it's an xml file :). We'll deal with these in a future post, but for now it must be object-oriented: each 'class' has 'attributes' and 'methods'. The methods have a name and a list of arguments. Keep all types to String for now... assume all interaction is via URLs, which can't marshal bytecode - that's something we'll have to deal with.
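As a sketch of what such a model file might look like, here's one possible XML shape (classes with attributes and methods, everything typed as String for now) and a few lines to parse it. The XML layout and the Link class are my own invention, just one way to write it down.

```python
import xml.etree.ElementTree as ET

MODEL_XML = """
<model>
  <class name="Link">
    <attr name="url" type="String"/>
    <attr name="title" type="String"/>
    <method name="play">
      <arg name="screen" type="String"/>
    </method>
  </class>
</model>
"""

def load_model(xml_text):
    """Parse the model file into {class: {attributes, methods}}."""
    model = {}
    for cls in ET.fromstring(xml_text).findall("class"):
        model[cls.get("name")] = {
            "attributes": [a.get("name") for a in cls.findall("attr")],
            "methods": {m.get("name"): [a.get("name") for a in m.findall("arg")]
                        for m in cls.findall("method")},
        }
    return model
```

Once the model is data rather than code, the same file can drive the http interface, the documentation pages and the interoperability layer.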

Offer access to the entire business model via http, including content (values) and control (invoked methods).

Document all this very nicely in an embedded set of html pages. No smarts necessary. Simple solutions always work better than complex ones.

If you're looking for an asset/modelling/http object interface framework, check out the com.razie.pub.assets package of my razpub project.

Stick to the REST principles for now, until my "REST is bad" post is posted.

Modularity - allow extensions of functionality. Since the future is OSGi, may I suggest an OSGi-compliant server - mine will be, but isn't yet.

And generally, don't forget to have fun!

P.S. Just to give a concrete example, the VLC player is a great one. It has several interfaces, including telnet and http and you start whichever you want...

Thursday, June 4, 2009

What's up with the home cloud?

ABSTRACT: An abstract description of what I'm up to here.

I am generally frustrated with the state of software - and have been, for a long time. People love to reinvent the wheel all the time, make life miserable for the users that don't stick to base use cases etc.

I have some suggestions on how applications should be written and what the (software) world should look like. This is of course just one of the directions in which we could evolve, but it's the one that makes the most sense to me.

There are quite a few components to this picture so you'll have to bear with me over the course of several separate blogs on related topics and, at the end, we'll put everything together.

There will be four different levels to this new "architecture", bottom-up:

1. The basic idea is that there are certain things that are necessary to include in ANY software component of any kind, for instance exposing an API to access and manage the internal objects, data and functionality.

2. A generic distributed application support framework will then use these to generate the "home cloud" - a cloud of interacting devices and objects under your personal, direct and total control.

3. From here, moving to the web, you can have clusters of clouds of different functionalities, completely removing the necessity for centralized software.

4. On top of these, there are different functionalities and specific use cases, such as "using identities", synchronizing files etc. I am not happy with the way these are handled by existing software, so you'll get a chance to hear my opinion.

It will take me a while to put each into words and sample code. I'm not just blogging here, I'm also putting together a framework to test all these concepts, hopefully good enough that it will be used by others. You can get a glimpse and keep tabs on that here http://razpub.googlecode.com

I will try to blog at least once a month, so a somewhat complete picture will form this year, but don't hold either your breath or me to it.

Intended audience: mixed. The blogs are individually searchable, so a personal review of the "Windows Live Sync" for instance will benefit anyone searching for that kind of stuff. I will try to categorize each blog, to benefit the followers.

I tend to be terse, so be ready to fire up your brain and read between the words. Non software-development professionals may have a hard time with my terseness and techie wor(d) games...they'll figure it out fast and stop reading.

I think that's all for now - I'll end this here and start working on the next month's blog.

Have fun!