Wednesday, December 10, 2008

Turning Book Publishing Inside Out

Continuing the series on collaboration and coordination platforms (C&C platforms), I wanted to look at a more concrete example of how C&C platforms can change an industry and how they work to turn existing businesses inside out. My example looks at book publishing.

Book publishing has a range of steps to produce an end result (a book in the shops that is purchased by a consumer). Let’s look at the steps:

  1. Author a book
  2. Find a publisher willing to publish your book
  3. Edit the book
  4. Make changes to your book
  5. Create Cover art
  6. Print and bind book
  7. Market Book
  8. Ship book to stores
  9. Distribute revenue

Now I realise it isn’t necessarily as smooth as the list makes out and some of the items happen in parallel, but for the purposes of the example it works. Steps 3 to 9 are currently the realm of the publisher. That arose because a publisher was the only one who could organise the resources needed to complete steps 3 to 9 at reasonable cost.

But with Collaboration and Coordination platforms this is no longer true. C&C platforms reduce the costs of coordinating the various activities to the point that you don’t need a vertically integrated publisher to achieve a sold book.

For an author to publish their book, the C&C platform would provide them with access to a market of editors, designers to create cover art, print-on-demand services, book marketing services and payment features. The C&C platform doesn’t provide all the services rather it organises them and reduces the transaction costs of using services from multiple providers.

Say I am the author of a book. I upload my manuscript to the C&C platform. I then begin the workflow of publishing the book, with the system creating an alert to editors that a new manuscript is ready for editing. The editor that I select then makes the edits and uploads the changes to the manuscript; I review and accept or push back until we arrive at something we are both happy with. I then need to create the cover art and the system again creates an alert for designers. I select a designer and they get access to the manuscript so they can create relevant cover art. Once the cover art is agreed, the art is uploaded to the system and I progress to the next stage.

I now have a book ready to go, so I need to pass the manuscript and cover art along to a POD provider. This I do with a simple click on a button, loading the book into the system of the POD provider I have selected. This also puts advertising and listings of the book into key retailers, and I can begin marketing the book. It may be that I need support for the marketing, which I can source through the platform as well.
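
To make the handoffs concrete, here is a rough Python sketch of how such a platform might represent the workflow. All the stage names and provider labels are made up for illustration; this is not any real platform's API.

```python
from dataclasses import dataclass, field

@dataclass
class Stage:
    """One step in the publishing workflow, handled by an external provider."""
    name: str
    provider: str          # e.g. a selected editor, designer or POD service
    complete: bool = False

@dataclass
class BookProject:
    title: str
    stages: list = field(default_factory=lambda: [
        Stage("edit manuscript", "chosen editor"),
        Stage("create cover art", "chosen designer"),
        Stage("print on demand", "chosen POD provider"),
        Stage("market and list", "chosen marketing service"),
    ])

    def next_stage(self):
        """Return the first incomplete stage; the platform alerts that market."""
        return next((s for s in self.stages if not s.complete), None)

    def approve(self, stage_name: str):
        """Author sign-off moves the workflow along to the next provider."""
        for s in self.stages:
            if s.name == stage_name:
                s.complete = True

book = BookProject("My Novel")
print(book.next_stage().name)   # -> "edit manuscript"
book.approve("edit manuscript")
print(book.next_stage().name)   # -> "create cover art"
```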

The C&C platform guides the user through the steps necessary to complete a task (publishing a book), handles the necessary communications, makes sure everyone is paid and manages the media. It coordinates the various markets for each service so that together they can achieve a task. C&C platforms reduce the transaction costs to the point that a Firm in the Coasean sense is not the most effective way of achieving a task.


Monday, December 08, 2008

Answering API requests or not (QOOP)

OK, in this day and age, it shouldn't, nay, mustn't take 2 business days (and counting) to respond to a request for an API key. Unfortunately, that is what QOOP is doing.

I fail to understand why they do not have an automated system for generating API keys so developers can get started integrating QOOP into their sites. Even if the key is restricted in usage, the verification of commercial intent can happen later. But it is vital that any company offering an API make it dead simple to get started. Doing otherwise is simply the creation of a needless barrier to entry and will drive potential partners to competitors that don't have that barrier.
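
To show how little is involved, here is a minimal sketch of automated issuance of a restricted key. The endpoint, storage and tiers are entirely hypothetical and are not QOOP's API; the point is that a sandbox key can be handed out instantly and the commercial vetting deferred.

```python
import secrets, time

# Hypothetical in-memory key store; a real service would persist this.
API_KEYS = {}

def issue_sandbox_key(developer_email: str) -> str:
    """Issue a restricted key immediately; commercial verification can follow later."""
    key = secrets.token_urlsafe(32)
    API_KEYS[key] = {
        "email": developer_email,
        "tier": "sandbox",          # limited usage until the account is verified
        "daily_request_limit": 500,
        "issued_at": time.time(),
        "verified": False,
    }
    return key

def upgrade_after_verification(key: str):
    """Lift the restrictions once a human has checked commercial intent."""
    API_KEYS[key].update(tier="production", daily_request_limit=100_000, verified=True)

key = issue_sandbox_key("dev@example.com")
print(API_KEYS[key]["tier"])   # -> "sandbox"
```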

Honestly, this is simple stuff. You can't rely on human vetting in an environment like the internet. People do not scale, automation does.

Technorati Tag: QOOP


Reinventing the Wheel of Finance

One of the ideas batted around every so often during the financial crisis is whether it would simply be easier and cheaper to create new banks and leave the old ones to die. On and off I’ve been thinking about what this would mean. Which led to the question: “are the current financial institutions the best way to manage capital?”

Banks have three primary roles: 1) storing money, 2) transferring money and 3) lending dormant money. Storing money is, to the consumer, the main view of banks. They store your money for you until you need it to pay bills and purchase stuff. Transferring money is what allows you to get your money from storage to where you need it, e.g. an ATM or an online bill payment. Banks aggregate deposits and then lend this out to businesses and individuals to buy assets. It is this process that keeps money circulating and supports wealth creation.

There is no reason why these various roles need to be contained in one institution. We could in the future have one type of institution that stores money, another focused on transferring money and another on lending. What would happen if we separated these roles? For a majority of day-to-day banking tasks not much would outwardly change. The institution that stores your money would outwardly look like an existing bank with branches, debit cards and bank accounts. The transfer role is already there to an extent through various interbank agreements. Broadly, you won’t notice much.

Separating the roles will be very noticeable in lending. Banks aggregate deposits and lend this out to individuals and businesses to buy assets (property, machinery, etc.). Banks keep a percentage of deposits for people to use to pay bills and buy stuff. This works on the assumption that all the depositors are not going to need their money at the same time. This process gives banks access to very cheap money which they can then lend out.

By separating the roles banks wouldn’t do the aggregation and lending. Instead those that lend would need to develop various ways to aggregate deposits from companies that store money in order to fund lending.
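
A toy sketch of the separated roles might look like the following. The classes and numbers are purely illustrative assumptions: a storage institution holds balances, and an aggregator raises funds only from the portion each depositor has marked as lendable.

```python
class StorageInstitution:
    """Holds balances and nothing else; depositors flag how much may be lent out."""
    def __init__(self):
        self.accounts = {}            # name -> {"balance": x, "lendable": y}

    def deposit(self, name, amount, lendable_fraction=0.0):
        self.accounts[name] = {
            "balance": amount,
            "lendable": amount * lendable_fraction,
        }

class DepositAggregator:
    """A separate business that gathers lendable deposits to fund lending."""
    def __init__(self, interest_rate=0.04):
        self.interest_rate = interest_rate

    def raise_funds(self, storage: StorageInstitution) -> float:
        """Collect only the money depositors have explicitly made available."""
        return sum(a["lendable"] for a in storage.accounts.values())

bank = StorageInstitution()
bank.deposit("alice", 10_000, lendable_fraction=0.6)
bank.deposit("bob", 5_000, lendable_fraction=0.0)

aggregator = DepositAggregator()
print(aggregator.raise_funds(bank))   # -> 6000.0 available to fund lending
```
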
It is fair to ask whether this system would be more effective for society. I don’t know but it is worth exploring the idea. The advantages that I can see are:

  • Reduce conflict of interest between storing money and lending
  • Competition among deposit aggregators will produce a sustained increase in the interest paid to depositors
  • It opens up the financial utility system for horizontal innovation and competition leading to better delivery of financial services

The existing banks and financial institutions will cry bloody murder as it attacks the status quo and potentially their profitability. However, they really haven’t demonstrated keen management of one of society’s key utilities. A potentially more important issue is that individuals will be personally confronted with the idea that not all of their money is in their account. The existing illusion of your bank account actually having all your money ready to go will be stripped away as individuals have to decide how much of their account to lend to the aggregators. The removal of the illusion is very likely to create problems as people adjust to reality.

Separating the roles isn’t something that needs to be forced on existing players. Instead the various regulatory bodies should reform regulations to allow companies to step up and fulfil each role. Unleashing innovation in the delivery of financial utility (rather than in financial instruments) to society will do much to resolve the financial crisis.


Wednesday, December 03, 2008

Transparency and ending the Financial Crisis

Mike Masnick has an interesting post on the requirement for transparency in order to trade. The basic premise is that a lack of transparency, and therefore of information, is at the root of the crisis, as people simply don't know what the value of anything is any more.

This is along the lines of my own thoughts. The subprime crisis was the trigger where people realised they had no idea what everything was worth and it spiralled from there.

As information or lack thereof is at the heart of this crisis, I believe that the continual injections of cash and purchase of assets is only prolonging the issue. The system is relatively stable now so the key is radical transparency. The banks, financial institutions, hedge funds etc. need to open their books to 3rd parties (trusted 3rd parties) in order for the information to be found.

Once that is done, investors will regain confidence in their ability to value companies and assets. At the moment they can't and so won't risk their money.

This will probably require Government legislation to force the opening but it has to happen. The more public the information is made, the faster this whole crisis will be solved, and then everyone can move on to fixing the damage being done to the real economy (you know, the part that creates real wealth and improved living standards?).


Tuesday, November 18, 2008

How Can Australia Weather the Financial Storm?

Australia, like the rest of the world, is coming under some battering from the financial crisis. It is worth looking at whether we can take advantage of the situation to make drastic improvements to the overall economy. Let’s begin by looking at where things stand.

The simple version is that the world is heading for recession and demand for Australian goods and services will go down. Various commentators and the Government have talked up China’s growth as keeping Australia from slipping into recession. I don’t think this is going to work. The Wall Street Journal has reported a 4% drop in electricity consumption in China in October. Production of goods and services will fall due to weakening demand in the US and Europe, China’s key export markets, which will mean China has less need for the mineral commodities that have been the engine of Australia’s growth for the last few years.

The other idea floating around is that China and India’s middle class will continue to consume. There are two problems with that: 1) the Chinese and Indian middle class, while big, is not on the scale needed for its consumption to replace the demand of the West, and 2) much of the middle class in India and China depends on jobs that supply goods and services to Western consumers, so as demand in the West falls, expect the size of the Chinese and Indian middle class to fall too. A lot of people don’t understand how much of the world’s growth has been due to the debt-fuelled consumption of the West. There is nothing to replace that demand.

As to weakening demand in the West, a few simple yardsticks indicate that the de-leveraging still has a way to go. House prices are still 3 times the average household income and household debt is still much greater than disposable income. In fact, we have yet to feel the effect of unsecured lending and credit card debt.

Unfortunately for Australia, the Government’s hasty and badly contrived deposit guarantee introduced instability into an otherwise stable financial system.

I expect that Australia will fall into recession and while the recession may not last very long the economy will remain flat until the system has de-leveraged itself back to more realistic levels. Golden years are over for now.

But...but this crisis is the single greatest opportunity Australia has to make a rapid shift, a quantum leap if you will, to a low carbon sustainable economy. Taking advantage of the crisis requires short, medium and long term initiatives by the Government, which meld together to shift people into green jobs, reduce per-capita energy consumption and prepare the ground for a low carbon economy by investing in key infrastructure.

Short term, Australia’s already had the fiscal stimulus (by short term I mean right now). That will hopefully keep things from falling off the cliff. The problem with handing out money is that a lot will go into paying down debt and less into consumption. An additional short term measure is to expand the solar hot water and solar power programs for households. This program already has the management systems in place and has been “closed” due to wild adoption. Simply by adding several hundred million dollars to the program, the number of installations can be increased, creating jobs and reducing carbon emissions.

Medium term (six months to a year) there are the tax cuts in July 2009. That won’t be enough and although the Government has talked about a 2nd stimulus package, that same money would be better spent focusing on the following initiatives:

  • Paying for low income and pension households to double glaze and insulate the homes, install solar hot water and high-efficiency air-conditioning;
  • Provide up-front income contingent loans to other households to do the same;
  • Fund the installation of solar panels and solar hot water on all government buildings and schools across Australia;
  • Soft loans to small business and commercial real-estate owners to install insulation, double glaze windows, install solar hot water and solar panels and purchase new high-efficiency machinery;
  • Revamp the car bail-out to allow people to trade in low-mileage cars to the government for hybrid and electric cars;
  • Begin construction of a national conduit system across towns and cities, starting in regional towns. The single biggest cost of laying fibre broadband is having to dig up the road. Creating a conduit system that anyone can get right of access to will allow new competitors to provide super-fast broadband (100+ Mbps symmetrical) to households. Additionally, conduits are standard, so the contracts can be tendered to a wide range of construction companies, creating short-term jobs that are localised.

These initiatives will create green jobs and reduce Australia’s per-capita energy consumption and carbon emissions while also freeing up household income as utility bills fall. Again, this income will initially go to paying down loans, but in about a year it will move into demand, replacing the spending of the Government initiatives.

Long term initiatives are everyone’s favourite infrastructure projects. The projects I see as having the best long term pay off are:
  • Fund a High Speed Rail on the East Coast initially connecting Sydney, Canberra and Melbourne with an option to Brisbane using TGV standard;
  • Fund the development of light rail in the major capital cities and regional centres;
  • Fund the development of DC High Voltage transmission lines that connect the centre of regional Australia to the major cities and regional towns.
All three of these initiatives would create a range of jobs over several years. They would also reduce the carbon footprint of the economy. The DC high voltage lines are important as they make large-scale solar more viable by providing an effective, low loss method of getting solar generated power to the cities. HSR can be started quickly by utilising the existing TGV standard and designing to support 500 km/h speeds.

The financial crisis offers the single greatest opportunity to cross the tipping point to a low carbon sustainable economy. With a bit of forethought and planning, the Government (or Opposition) can make a huge difference to the country.

Thursday, November 13, 2008

In support of James’ Cloud

James Governor recently did a re-run of his 15 Ways to Tell Its Not Cloud Computing, addressing some of the critical reaction he has received, in a post titled 15 Ways I Am Wrong About Enterprise Cloud Computing. I broadly support James’ original thesis and I’ll explain why.

Definitions of Cloud Computing abound and some are really wordy and, well, confusing. When I think of Cloud Computing I keep it to the following definition:

“Cloud Computing abstracts the where and how of computing to allow users to focus on the what”


By this I mean that developers no longer need to worry about the details of how computing is delivered or where the computing is located (i.e. what server); instead they can focus on making sure their application achieves what they want and is reliable.

So a Cloud is more than simply a grid or utility computing, as it also needs to support software stacks without the developer worrying about how it is done. A full-on cloud negates many of the low-level management requirements and simply provides computing and storage resources that are on-demand and easy to use, like booting an OS and running an application.
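
As a rough illustration of "focus on the what", a developer-facing cloud might accept something like the declarative manifest below. The field names and the deploy call are hypothetical, not any vendor's API; the point is that placement and hardware never appear in what the developer writes.

```python
# A minimal sketch, using entirely hypothetical names, of declaring the "what"
# and letting the cloud decide the "where" and "how".

app_manifest = {
    "name": "order-service",
    "runtime": "python3",
    "instances": {"min": 2, "max": 20},     # scale on demand
    "storage": {"type": "key-value", "size_gb": 50},
}

def deploy(manifest: dict) -> str:
    """Stand-in for a cloud API call; placement and hardware are the cloud's problem."""
    # The platform picks servers, provisions storage and wires up scaling;
    # none of that appears in the manifest the developer writes.
    return f"{manifest['name']} deployed with {manifest['instances']['min']} instances"

print(deploy(app_manifest))
```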

Now what we will have is internal and external clouds to an enterprise. Think Internet versus Intranet. The reason for deploying an internal cloud is to reduce the hardware capital inefficiency most enterprises face, along with providing internal developers access to the benefits of Cloud Computing while still meeting the desire for security and control of data and applications.

Companies like Sun, IBM and HP will roll out “cloud-in-a-box” products that allow enterprises to replace existing hardware with clouds. Will the enterprise own each server that makes up the “cloud-in-a-box”? Probably not. Instead they will own the “cloud” with a maintenance contract that sees the vendor swap out the hardware regularly to keep the computing capability of the cloud growing.

The reason for enterprises to deploy internal clouds is simple – it increases the capital efficiency of IT while allowing developers and system administrators to focus on the application and less on keeping a mass of hardware and low-level software running and up-to-date. External clouds will work for many businesses that don’t need massive internal applications. It isn’t really an either/or proposition.

James notes at the end of his reply that he is half-right and half-wrong. I agree, with the caveat that I think he is more than half-right and less than half-wrong. Until enterprises can tick off his 15 points they will not be taking full advantage of the potential of Cloud Computing.


Monday, November 10, 2008

Optimisation of Workflow and Collaboration Platforms

Optimisation of workflow is the aim of the game. We want to reach the goal with as little expenditure of resources as possible. Unfortunately, much of the optimisation game has been played at the level of the individual action, which usually results in a destabilised system. This post in the series on collaboration and workflow will look at how workflow should be optimised and the role that collaboration platforms can play.

A reasonable question to start with is: why optimise? Optimising or improving workflow increases the throughput. More simply, optimising workflow means more gets done with fewer resources. For business it means they can focus on producing the most value without wasting resources.

Optimisation of workflow is not about making a single action overly productive but instead about balancing the various actions in a workflow to produce the best overall throughput. It is about making the system robust rather than optimising for a particular scenario. “The Goal” by Goldratt provides a useful case study on optimising workflow.

Actions within a workflow can be re-arranged, removed, melded together and improved. The key is that any modification of actions within the workflow needs to focus on improving the overall workflow throughput. While an individual action’s throughput can be increased from, say, 80 to 90%, there is no point in doing so if it does not increase the overall throughput of the workflow.
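
A small worked example makes the point: the workflow's throughput is set by its slowest action, so improving a non-bottleneck action changes nothing overall. The numbers are illustrative only.

```python
# A workflow's throughput is the minimum of its actions' capacities, so only
# lifting the constraint (the bottleneck) improves the overall result.

def workflow_throughput(actions: dict) -> float:
    """Units per hour the whole workflow can deliver."""
    return min(actions.values())

actions = {"prepare": 120, "assemble": 80, "inspect": 100}   # units/hour
print(workflow_throughput(actions))        # -> 80, limited by "assemble"

actions["prepare"] = 150                   # optimise an individual action...
print(workflow_throughput(actions))        # -> still 80, no overall gain

actions["assemble"] = 95                   # lift the constraint instead
print(workflow_throughput(actions))        # -> 95
```
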
So where do collaboration platforms come in? Collaboration platforms have two functions: (1) they serve as a framework within which to improve workflow, and (2) they offer a way of improving individual actions.

The improvement of individual actions is a tried and tested use of collaboration platforms. Think parallel editing of a client document or the management of tasks for a project. But this only goes so far, because it does not optimise the action in the context of the wider throughput of the workflow in question.

The framework aspect of collaboration platforms is underdeveloped, and in terms of overall impact on business this is where changes will have the most dramatic effect on a workflow. The collaboration framework allows users to optimise and control actions within the context of the overall workflow.

The idea of a framework is to allow users to build a workflow from individual actions, examine how this workflow performs and selectively change (add, remove, optimise) actions, all with the aim of improving the overall workflow. It is like using plant control software to change various flow rates and valves in order to change the amount of a chemical produced.

To illustrate what I mean, let us look at the example of putting on an event. An event requires the coordination and completion of a series of actions such as booking and managing the venue, managing attendance and paying various entities. Each of the activities would be arranged as required into a workflow, with the collaboration platform ensuring the smooth handoff between activities. None of the activities need to be powered by the platform; rather they are coordinated and controlled using the collaboration platform in order to achieve the goal of the workflow. Using real-world companies: the venue would be booked and managed through BookingBug; RegOnline handles the registration and attendance management; Moo.com produces the tickets and ID; PayPal is used to manage payments to various entities; and Huddle coordinates all these activities and manages the communication and information between the event organisers. The event organiser can focus on creating a compelling event that runs smoothly.

It is the coordination and control of actions that produces the dramatic improvement in workflow and consequently the value created by the business. As workflow now and increasingly extends across multiple organisations, coordination is key to ensuring that workflow is as effective as possible. At the same time, as collaboration platforms improve the coordination of workflow, workflow will increasingly be made up of various groups working on the actions to which they add the greatest value (think of the example above). It is a positive feedback cycle – improved collaboration increases the value of various groups working together, which in turn drives improvement in collaboration and so on.

We are seeing the rise of workflow-specific platforms such as Amiando and RegOnline in the case of event management. I suspect this trend will continue but there is a lack of flexibility to this approach. The real revolution will happen as the current crop of collaboration platforms, along with new entrants, evolve towards workflow coordination platforms that support plug-ins and specialist modules. Most groups require more than a single workflow to operate and I expect single-workflow services such as Amiando and RegOnline will work within generalised coordination and collaboration platforms.


Understanding Workflow

In the first post, Collaboration Reformation, of this series on collaboration and workflow, I made the statement about the transformation of collaboration from resources to activities. It is worthwhile looking at activities, or more accurately workflow, specifically before moving on to further discussion on collaboration and workflow.

Workflow is essentially a series of discrete actions that when grouped together produce a desired outcome. The obvious example is a manufacturing assembly line. A series of actions such as screwing on a door and adding an engine are arranged together in a line in order to assemble a car. Manufacturing assembly lines are obvious but don’t become hooked on the assembly line example. Workflow is simply a series of discrete actions performed together to achieve a goal. The goal can just as easily be the development and roll-out of new features for a web service as the assembly of a car.

Goals can range from a product or service (say a car or a massage) to something more intangible (say increasing support for a candidate). For most of the post I will focus on the product and service goals but it applies equally to intangibles as well.

Workflow can be broken down into three categories: operational, development and overhead. Operational is the workflow that delivers the goal. Development is workflow that is necessary to create, improve or fix a goal. Overhead is workflow necessary to keep the organisation and group going in order that it can achieve the operational and development workflow. There is overlap between the three categories of workflow and you can represent it as a Venn diagram.

The workflows necessary to complete a goal are unlikely to be fully contained within a single group. A car maker doesn’t make the bolts or wires that go into a car. Toyota recognised this, which is why the Toyota Production System works to coordinate workflow not only within a Toyota plant but also in its suppliers.

Each workflow is made up of lots of different actions. Optimising an action without consideration for the workflow de-stabilises the whole workflow and produces counter-productive results. It is no good having one action of a workflow produce more than can be processed by downstream actions. In “The Goal” Goldratt provides a series of good examples of what happens when optimising a single action versus optimisation across the workflow.

In the next post in this series, we’ll look at how optimisation can be achieved and look in more detail at how collaboration platforms are a part of this optimisation.


Sunday, November 09, 2008

Could the Guardian Media Group be the New York Time’s white knight?

I was talking to Seamus McCauley last night about newspaper problems, with specific reference to the New York Times’ problems. Today Silicon Alley Insider has an interesting post looking at the New York Times’ financial problems.

One of the points made last night was the shift to global news brands that are less tied to a specific media channel. The BBC and the Guardian Media Group are both representative of this trend. The New York Times faces a tough choice about how to pull itself out of the hole that it finds itself in.

One possible remedy is the sale of the New York Times, but there are few other media companies around with the general strength to save it. Guardian Media Group (GMG) is one of the few. A purchase of the New York Times by GMG would fit with GMG’s expressed desire to expand its US presence, and the New York Times would provide GMG with a strong US news brand and the attendant benefits.

The New York Times would benefit from GMG’s unusual corporate structure and from becoming part of a news group pursuing an interesting and innovative multi-platform strategy for news.


Keywords from Questions

In a recent article Google Search Quality Tech Lead Daniel Russell talks about an example of a user using keywords to find a ferry timetable. What struck me as interesting was how the user didn’t hit upon using the keyword “ferry” until well into their search task.

I suspect this was caused by the user starting with a question with words to the effect of “When does the ferry leave San Francisco to Larkspur?” and then attempting to turn this into a series of keywords by knocking out words such as “does”, “when” etc. The word “ferry” got knocked out of the user’s first run of keywords as it was a generic reference to a ferry. In this case “ferry” was thought of as a common noun rather than a proper noun.

If my hypothesis is reasonable, then the quality of keywords is going to depend on how the user first structures the question in their mind. For example, if the user had structured the question as “When does the San Francisco to Larkspur Ferry leave?” the word “ferry” would have been used as a keyword.
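
A rough sketch of the hypothesis: if the user mentally keeps proper nouns and drops function words and generic common nouns, then the phrasing decides whether "ferry" survives as a keyword at all. The word lists and the capitalisation heuristic are illustrative assumptions, not how Google works.

```python
# Illustrative stop-word list; the point is only how phrasing changes keywords.
FUNCTION_WORDS = {"when", "does", "the", "to", "from", "leave"}

def keywords_from_question(question: str) -> list:
    keywords = []
    for word in question.rstrip("?").split():
        if word.lower() in FUNCTION_WORDS:
            continue
        # Proper nouns (capitalised mid-sentence) are kept; generic common
        # nouns are the ones a user is tempted to drop.
        if word[0].isupper():
            keywords.append(word)
    return keywords

print(keywords_from_question("When does the ferry leave San Francisco to Larkspur?"))
# -> ['San', 'Francisco', 'Larkspur']   ("ferry" dropped as a common noun)
print(keywords_from_question("When does the San Francisco to Larkspur Ferry leave?"))
# -> ['San', 'Francisco', 'Larkspur', 'Ferry']   (now part of a proper noun)
```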

The potential importance of the way a user mentally structures their initial question points to a severe limitation of the keyword-and-ranking search paradigm. The speed and quality of the search experience is heavily dependent on the user structuring the initial question so as to readily identify effective keywords, something that the search engines can do little to affect.

On the other hand, search engines based on the question and fact paradigm, such as True Knowledge, will not suffer this problem.


Friday, November 07, 2008

Micro-startups in a Collaboration and Coordinating world

In Jason Calacanis’s recent email missive, he explores where the value for start-ups lies. Two points, “The Age of the Micro-startup” and “The Try Everything Era”, touch on the long-running battle between features and products. Put succinctly, many start-ups are little more than a feature (albeit useful) and in and of itself not a sustainable product.

Jason’s theme is that the high capital efficiency of today’s web will allow features to blossom and expand on existing services. I remain sceptical of feature companies (micro-startups in Jason’s terminology) as standalone going concerns, but I see the value that these micro-startups can create in a collaboration and coordination world.

Collaboration and coordination platforms will enable micro-startups to create value by adding functionality that increases the overall value of the platform. The advantage for the micro-startups is that the value of their feature is increased by being coordinated with various other features allowing users to achieve a goal. The value of the whole is more than the sum of its parts. Further, they get access to a framework that simplifies coordination and business operations (e.g. getting revenue).

Collaboration and coordination platforms also benefit from the multiplicative effect, as well as from features being added to the platform cheaply and in response to demand. While they could build a lot of the features themselves, the development resources needed would limit what can be built and how quickly new features can be rolled out. Micro-startups form an eco-system that is self-organising about what to build and the resources to devote to it.

This differs from existing platforms, Facebook and OpenSocial, in that these platforms are about building micro-apps that have little to no coordination with other applications on the platform. The applications are standalone. These platforms essentially act as a hosting service with access to a social graph.

Coordination and Collaboration platforms are about coordinating actions (features) in order to achieve something. Achieving a goal is going to produce greater value in the long run and produce more value-producing companies than simply tapping a social graph.


Wednesday, November 05, 2008

Resources versus Answers – Asking a Question of Search

Search is very broad in meaning and it is easy to lose sight of the fact that search actually consists of two distinct sub-sets of queries. Both sub-sets aim to find something; one is looking for a resource and the other for an answer. At this time we use the same approach – keywords matched in a document that is ranked for relevancy via some method (human and/or algorithm) – across both of these sub-sets of queries. This works somewhat but we are rapidly approaching the limit of effectiveness for this approach. This limit is Marissa Mayer’s 80/20 problem of search.

The first sub-set is finding resources (e.g. documents). The current keyword and ranking method works well for this type of query. This is what has fuelled Google’s growth. Keyword and ranking works when a user is looking for one or more resources on a topic, such as blog posts talking about an election. Where it falls down is answering specific queries such as “How old is the Eiffel Tower?” The user in this case is looking for a fact. Users have gotten around this problem by using the returned resources from a search as the basis to find the answer they are looking for, a human adaptation to a systemic problem.

Finding answers is the second sub-set. While we currently rely on keywords and ranking to navigate to an answer, it is cumbersome and not effective. Instead the paradigm of keywords and ranking needs to be tossed out. Finding answers works better with a question and fact. A question (as opposed to a query) allows the system to identify what fact is being asked about. For example the question “How old is the Eiffel Tower?” focuses the particular answer to be found to the age instead of potentially the location, who built it, what it is made of etc.
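
A toy sketch of the idea, with entirely hypothetical patterns and a hand-built fact store (this is not how True Knowledge is implemented): the question itself tells the system which attribute of which entity is wanted.

```python
import re

# A tiny fact store keyed by (entity, attribute); the Eiffel Tower date is real,
# everything else here is illustrative.
FACTS = {("Eiffel Tower", "age"): "completed in 1889"}

PATTERNS = [
    (re.compile(r"how old is the (?P<entity>.+)\?", re.I), "age"),
    (re.compile(r"where is the (?P<entity>.+)\?", re.I), "location"),
]

def answer(question: str) -> str:
    for pattern, attribute in PATTERNS:
        match = pattern.match(question)
        if match:
            entity = match.group("entity").strip()
            return FACTS.get((entity, attribute), "fact not known")
    return "not a fact question; fall back to keyword and ranking"

print(answer("How old is the Eiffel Tower?"))   # -> "completed in 1889"
```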

Using the question and fact paradigm to find answers creates new approaches to using web services and increases the usefulness of the web to everyday life. This isn’t to say that question and fact will replace keyword and ranking; rather it is complementary and produces better results for a sub-set of search.

Consider the example of finding flights for a holiday. Using keyword and ranking the user would type in something along the lines of “flights cheap [destination]”. The engine would then return a series of web sites that match those keywords. The user then navigates to those pages and then drills through the pages to find the answer to their question. If, however, question and fact is used the user would type in “What is the cheapest flight to [destination] leaving on the 21st of December?” The web then returns the fact that flight y priced at x leaving at 10 am on the 21st is the cheapest flight. How much quicker and easier is that to understand?

For many people the web and search are still too difficult to use. But they know how to ask a question, and this opens up the utility of search and the web to a whole range of users that are intimidated by it. It is worth repeating that question and fact will not replace keyword and ranking. There are queries for which question and fact doesn’t work, just as there are queries for which keyword and ranking doesn’t work. They are complementary.

Question and fact does have the potential to boost the growth of paid search results. The boost arises from question and fact providing a better signal of the user’s intention, and so improves the targeting of advertising that better answers the query. For example, if a user asks the question “what is the cheapest holiday for a 16 year old girl in Mexico?” it is very reasonable to assume the intent is to find a holiday for a 16 year old girl in Mexico. A keyword-and-ranking search would produce results about holidays in Mexico without any knowledge of for whom or why the person is searching, although an assumption could be made that the person is looking for themselves. Interestingly, throw in demographics and/or behavioural data and the system will produce completely the wrong answer. Say, for example, the person is a 52 year old male, in which case the system is likely to return Mexico holidays for a 52 year old man when his intention was to find a holiday for his 16 year old daughter.

Question and fact will go a long way to addressing the 20% of search remaining. Many web services implement crude methods for asking a question, ones that are frankly laborious and time consuming to use. The key to unlocking the power of question and fact is to make it as easy as possible to ask the questions. The pitfall to implementing question and fact is knowing when to use it. Question and fact works when the question can be answered by a fact e.g. “How old is the Eiffel Tower?” It doesn’t work when the answer is not a fact e.g. “What is the best holiday in Mexico?”


Friday, October 31, 2008

The Collaborative Reformation

Collaboration has become the buzzword of the times. It holds out the promise of making all work productive all of the time, an attractive carrot in times of economic crisis. While collaboration tools will improve the effectiveness of existing processes and businesses, their true impact and most dramatic effect lies in how collaboration tools can bring about a reformation of business.

Improving the efficiency of existing processes works up to a point and indeed most, if not all, collaboration tools are predicated on somehow improving the efficiency of existing processes. The real promise of collaboration tools is how they can help users reform the fundamental processes of businesses. I’m not simply talking about getting rid of layers of approval but the complete overhaul of how new work is brought in, how it is created, how it is charged and how it is produced.

The impact comes from allowing businesses and users to focus on the work that adds value and to streamline and eliminate the non-value-add work. Elimination may involve out-sourcing to another business where the particular process or work is their value-add. Think designing cover art for a new book. It is not a value-add proposition for a publisher but is a value-add proposition for a designer.

The reformation extends beyond simply managing documents and information to completing the core tasks of the business whether it is a plumber, development agency or a manufacturer. Collaboration services need to be given access to the physical world that plays such an important part in many businesses whether it is tasking plumbers and ordering plumbing supplies for delivery or controlling a CNC machine.

In effect collaboration services need to evolve into a framework within which business operates; a framework which supports agile business processes, modules for specialist features (think CAM control) and management of information within the business. This is the path for development of collaboration services such as Huddle.

The ultimate goal is supporting the ideal of the networked business. A “Business” is a network of smaller businesses using a common collaboration platform with specialist modules from various providers. Business becomes a network of networks in which the collaboration service coordinates activities. It is this that is the root of the dramatic and sustainable change that collaboration services can bring to business.


Tuesday, October 21, 2008

Can Hubdub Survive?

I’ve been playing with Hubdub recently and all I can say is it has a looong way to go. In fact I think that unless some major changes are made Hubdub won’t survive 2009. Unfortunately, I think Hubdub faces some major, major hurdles that will make it a struggle to build a decent revenue stream (remember, cash is now king...angel funding will only get them so far).

Hubdub, for those that have not come across it, is a prediction market based around news. The idea is to combine news aggregation of some sort with predictive markets.

I’ve put down bets and created questions. My consequent experience has been less than heart warming. But to illustrate my concerns let’s look at the questions I created.

My first question was “Will Digg buy Hubdub in 2009?” Speculative yes, but Hubdub is a predictive market – questions are by nature speculative. Background to the question: Kevin Rose discussed Digg’s international expansion plans in his talk at FOWA London. This talk was widely reported in the media. Some other facts:

  • Digg has just closed a funding round of $29m in September
  • Hubdub has only raised angel funding and has 4 employees
  • In this economic climate cash is king. Getting revenue positive is the holy grail
  • Hubdub has not articulated a source of revenue that is sustaining

From these data points (which are all easily available with a quick search on the web) one concludes that Hubdub is a good target for acquisition. It has a decent (although it requires some work) prediction platform but other than that it has nothing special. Kevin Rose wants to expand internationally and the prediction market technology would work well with the Digg platform. It would certainly give Digg a greater number of potential revenue streams. I will be the first to admit that this is speculative but it is based on facts. But all it took was one person to raise a question and the question was voided.

The second question was “How far will UK house prices fall by December 2008?” This question was voided as it didn’t have an option for the house price fall being below 15%. Let’s look at the logic of this. As of September 2008 both Halifax and Nationwide have reported house price falls of 12.4%. There are three more months to go before the end of December and these things don’t turn around on a dime. Being below 15% is not an option as credit is still tight and the UK has entered recession. There is no sound possibility that house prices will fall by less than 15%. Now let’s look at this from the angle of the question creator. I may not want to provide an option, so why should I? Why do questions have to be modified to provide gamblers with an option they want?

The issues with the questions are merely symptoms of what I see as major flaws of the Hubdub system. They are:
  1. The rules for creating questions are vague and easily open to interpretation.
  2. The very act of creating questions is daunting and annoying
  3. The site is rapidly becoming dominated by power users

Hubdub is seeing the playing out of Clay Shirky’s maxim “A Group Is Its Own Worst Enemy”. Power users and early adopters will band together to void questions and in other ways hassle newbies merely because they don’t like the question (rather than its predictive quality) or because newbies are falling afoul of capricious unwritten rules. There is no penalty against power users for this type of behaviour. This is over and above the effort needed to create questions in the first place. It is extremely disconcerting to put the effort into questions only to have them voided on relatively spurious grounds.

The single most worrying aspect about the flaws in Hubdub is that everyone has been talking about the development of communities and their interaction for the last 11 years. Let’s recap – Usenet went through the same problem, Slashdot went through the same problem, Digg went through the same problem. See a pattern here?

Clay Shirky has been shouting from the roof tops about it for years. Hugh McLeod and Tara Hunt have both discussed it. At what point do people pay attention? Angel investing or not, for a service that is based on community and users, not having the tools needed to manage the community’s interaction with the platform is simply, well, scary. It speaks to a company that has a fundamental lack of understanding of community-based services.

Is it important? Very. Hubdub needs a diverse, large and vibrant community not only of speculators but also question creators. The way Hubdub is going to make money is from selling premium access to data and audience to businesses. However, companies will only pay if the Hubdub community is diverse and large as then the data and audience has value to them. As it is the current community is doing very well at driving new members away. Hardly a good method of growth.

Hubdub can possibly turn this around. The first step is to build the tools and features necessary to manage the community interaction. This will piss off the power users and early adopters as it blunts their power. That is the price to pay for improving the experience and engagement for a broader and more diverse group of people. Actions must have consequences. When actions have no consequences, poor behaviour soon dominates. Slashdot found this out the hard way, as has Digg.

The other part of the engagement issue is question creation. Relying on people to read FAQs about question creation is very, well, RTFM. Most people don’t RTFM and nor should they. If there are rules about question creation they need to be clear and objective with no room for abuse to void questions that someone doesn’t like. Of course, if the rules are clear and objective the system should not allow questions to be created in the first place that don’t meet those rules. People should not have to RTFM.

In fact, I wonder why voiding is necessary at all – isn’t the very act of betting on a question a vote on the question's quality? Why not just use the activity on the questions as a way to surface or subsume questions? Activity is a much better method than voiding or voting. Using activity blunts the prejudices and power of any single person or group of people.
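
As a sketch of what activity-based surfacing could look like (the weights and decay period are assumptions I've made up, not anything Hubdub does):

```python
import math

def activity_score(bets: int, comments: int, hours_since_created: float) -> float:
    """More betting and discussion surfaces a question; stale ones sink."""
    raw = bets * 3 + comments                    # betting weighted above chatter
    decay = math.exp(-hours_since_created / 72)  # roughly three-day decay, an assumption
    return raw * decay

questions = {
    "Will Digg buy Hubdub in 2009?": activity_score(bets=40, comments=12, hours_since_created=24),
    "How far will UK house prices fall?": activity_score(bets=5, comments=2, hours_since_created=120),
}
for q, score in sorted(questions.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{score:6.1f}  {q}")
```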

The idea, interleaved through the emails from Hubdub and the site, of not being a speculative market still stumps me. It’s a predictive market; they are, by definition, speculative. It’s like being a fish and trying not to drink the water. Hubdub is a speculative market – if the problem is with questions that don’t have any news or are a long time in the future, have a special section for these types of questions. Don’t ban them.

I did hope that Hubdub would be good. But I am sorely disappointed and I now doubt the company’s survival. I certainly would not invest any money in the company without some major changes to the platform.

Monday, October 20, 2008

Chaos, Finance and Non-linear Behaviour

With all the noise around the causes of and remedies for the financial woes we are currently experiencing, something has been niggling at the back of my mind. It was only after watching the BBC documentary "High Anxieties: The Mathematics of Chaos" and reading some posts on nodes that it twigged.

The idea that the financial system is fundamentally chaotic (in mathematical terms) has been around for a while, so that isn’t new. A system being chaotic is not a problem in itself, it just is. The problem lies in the transition from linear to non-linear response in a chaotic system. Here we had a response that seemed to be out of proportion to reasonable rules of thumb for the system. The size of the sub-prime losses shouldn’t have been enough to trigger the meltdown.

Unless of course the system was optimised and being highly driven. The past 20 to 30 years has seen the financial system optimised for making money. Without getting into the whys, wherefores and who did it, the optimisation process pushed the financial system to the edge of instability. Optimisation moves a system closer and closer to instability, which is how you get “optimised performance.” This works fine when a system is steady-state without unexpected shocks. The downside is that it takes only a small amount of non-steady-state change to force the system into instability.
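
A loose illustration of the point, using the logistic map rather than any model of finance: push the driving parameter up in search of "performance" and the same tiny shock to the state produces a wildly different outcome once the system sits near the edge of stability.

```python
# Logistic map x -> r*x*(1-x): gently driven (r=2.8) it settles to a fixed
# point regardless of small shocks; hard driven (r=3.9) the response is
# non-linear and a tiny shock changes the outcome completely.

def run(r: float, x0: float, steps: int = 200) -> float:
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

for r in (2.8, 3.9):                      # gently vs hard driven
    a = run(r, 0.500)
    b = run(r, 0.501)                     # a tiny "shock" to the starting state
    print(f"r={r}: outcomes differ by {abs(a - b):.4f}")
# At r=2.8 the difference vanishes; at r=3.9 it is large.
```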

In a chaotic system, instability creates a non-linear response that is unpredictable. That is what we are facing. A relatively small shock has sent the system into non-linear response. The system was pushed to the edge of non-linearity by two forces: growth in connections between nodes and the hard driving of the system.

The interconnection of nodes grew exponentially via the creation of various 2nd and higher order derivatives. The intent was to “spread risk”. Instead it amplified the risk across the system, which would in turn amplify driving forces. The second part was the cheap credit. This acted to effectively increase the energy sloshing around the system, driving it hard.

To stop future financial crises, we need to de-optimise the system. We need to make the system robust. Regulatory changes such as tying capital ratio to the incentive system of executives, whether a good idea or not, will do little in an optimised chaotic system as even a small shock can have massive consequences. Instead we need regulations that look at limiting interconnectedness of the various nodes in the system and work to dampen movement of a shock through the system.

Put another way, we want to shift the system into an area that has the broadest linear response to shocks as possible. Some possible ideas are banning all 2nd and higher order derivatives or counter cyclical capital ratios. There will be great resistance to making the financial system fundamentally robust as it will limit the money making ability of financial institutions.


Wednesday, October 15, 2008

Apple's "New Technology" not so new

It was interesting to watch the video of the Apple Notebook Presentation. The new notebooks are indeed items of engineering and design beauty.

But...

What struck me as very wrong was the claim around the new chassis for the notebooks. The claim that it is invented or new is simply and horribly wrong. The unibody is a variation of the monocoque technique that has been around in manufacturing for a long time. Aircraft have used the technique since the 1930s.

It might be new to computers but Apple's claim is highly suspicious. My real question is - what took so long? Why are advanced (and not so advanced) manufacturing and design techniques only now coming to computers?

I suspect it is because manufacturing has always been a tertiary or lower concern. Now is the time for computer companies to grab a few manufacturing and mechanical engineers, lock them in a room and tell them to pioneer new chassis designs that take advantage of the latest manufacturing techniques, equipment and design.

One other thing. The emphasis on the new chassis indicates that computer manufacturers have a lot of room to reduce costs and material usage in computers. It is going to be interesting to see how companies take advantage of the possibilities offered by advanced manufacturing.




Google Chrome's benefit for Mozilla Firefox

Mozilla released Firefox 3.1 beta 1 today. Reading through the release notes and the blog posts about the new beta, it is clear that Google Chrome's biggest effect on Mozilla Firefox was to encourage the Firefox developers to step up innovation and development a notch.

Mozilla had been coasting for a while. IE had ceased to be a challenger. Google Chrome has taken over that role (at least for now). Even if Google Chrome never gets more than 1% of the market, I would still call it a success for giving the Firefox developers the necessary kick in the pants.



Friday, October 10, 2008

The Crisis Makes the Leader

Brad Feld wrote an excellent post about leadership for entrepreneurs. Fred Wilson re-posted a quote to his blog. The core is that the time for leadership is now.

Until now leadership has been easy. It always is in good times. The current crisis will test the leadership skills of entrepreneurs. To paraphrase – the crisis makes the leader. This test is not going to be easy and I am sure a lot of entrepreneurs will be found wanting. I hope that VCs and angel investors will step up to the crease to backstop the entrepreneurs.

On the upside it will forge some great leaders that will be of huge benefit to the industry and society once we get out the other side. Something we sorely lack today.

For now it is time for entrepreneurs to step out in front of their people, swallow their fear and charge forward. Now is the time to lead from the front.

Thursday, October 09, 2008

Spiral to Disaster and Financial Engineering

There is a lot of blame going around for the cause of the current financial crisis. It is a laundry list and often seems to reflect the prejudices of the pundits rather than a rational consideration of what happened.

The striking thing about this mess for me is how closely the crisis resembles a spiral to disaster. Spiral to disaster arose (from memory) out of the fire on the Piper Alpha oil rig in the North Sea. The inquiry into the incident found that while a condensate leak initiated the fire, it was actually the failure or lack of various fail safes that ended in the loss of so much life.

While the financial crisis was kicked off by the sub-prime problem in the US, the reason it has gotten so bad is the lack or failure of the fail safes. There has been nothing to stop the spiral downwards into an ever increasing financial disaster.

The aftermath of the Piper Alpha fire was 100 recommendations to improve safety on oil rigs, which went on to be accepted industry-wide. We can only hope that the aftermath of the financial disaster will result in sensible measures that act as fail-safes to avoid systemic failure and stop the spiral to disaster.

Saturday, October 04, 2008

Widgets, Communities and the Edge

The web is making it easier and easier for groups and communities to form. Groups foster social cohesion by having members demonstrate affiliation and by the use of objects to create community identity. Think Star Trek fans wearing Star Trek uniforms at conventions or fans of Metallica wearing Metallica branded tee-shirts.

Unfortunately, web-based methods of indicating affiliation don’t really translate to the real world. This is important as groups are increasingly rooted in the real world; indeed the traditional line between cyberspace and the real world is becoming increasingly blurry.

Personalisation services offer the ability to create physical objects that indicate affiliation and community identity. These services are centralised and therein lies the problem. By being centralised they impose a coordination cost on the groups.

Widgets offer services like MOO.com and Ninjazoo the opportunity to offer personalised and communitised products directly into the community without getting in the way. Widgets provide a means of removing the coordination cost on groups by meshing the service within the normal activities and sites of the group.

It is taking the mountain to Muhammad rather than taking Muhammad to the mountain.

It is in the distribution of core functionality that the true value of widgets lies: not in the distribution of content, but in allowing web services to adjust to an Edge Economy.

Tags: MOO.com, Ninjazoo, Edge Economy, Web Services, Web 3.0

Friday, October 03, 2008

Data Half-life: Time Dependent Relevancy

Data Half-Life is not an indication of the importance of a particular piece of information. It is actually a measure of how long a piece of information is relevant. Relevance is not a substitute for importance. It is dependent on context and the information itself. So a low data half-life means that the piece of information will quickly lose its relevancy. A high data half-life means the relevancy will drop slowly.

Consider the story that Clay Shirky related in his keynote at Web 2.0 Expo in New York. In this story someone changed their relationship status from engaged to single. This information is highly relevant to some people and not very relevant to most others. Given that data half-life reflects the broader relevance of the information to a person’s network, it has a low data half-life. It is generally not relevant to most of the people in the network.

Now, they may want to know, or feel the need to know, but that does not mean it is relevant to them. It is easy to mistake the desire to know or the need to know for relevance. Desire to know has no bearing on the information’s data half-life.

By having a low data half-life, the relationship status will travel only so far through the person’s network, thereby avoiding the result in Clay Shirky’s story. Data half-life represents how time dependent the information is. The more time dependent some data is, the lower the half-life; the less time dependent, the higher the half-life.
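
Treating data half-life as an exponential decay makes the idea concrete: relevance halves every half-life period, so a status change (short half-life) fades fast while a reference document (long half-life) barely decays. The numbers below are illustrative only.

```python
def relevance(initial: float, hours_elapsed: float, half_life_hours: float) -> float:
    """Relevance halves every half-life period."""
    return initial * 0.5 ** (hours_elapsed / half_life_hours)

status_change = relevance(1.0, hours_elapsed=48, half_life_hours=6)
reference_doc = relevance(1.0, hours_elapsed=48, half_life_hours=720)
print(f"{status_change:.4f}")   # ~0.0039, effectively gone in two days
print(f"{reference_doc:.4f}")   # ~0.9549, barely decayed
```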

Tags: Filters

Thursday, October 02, 2008

Privacy Filters and Facebook

In my previous post I used privacy in Facebook as an example of how data filters could work. One point I glossed over was how Facebook, indeed all social sites, currently fail with social distance. Unfortunately, social distance is necessary for privacy filters to work satisfactorily.

Facebook has one major flaw: once a person is a friend on Facebook they are treated the same as all other contacts, whether the connection comes from bumping into the person at a pub or growing up with them. It collapses the privacy or social distance between two people. Social distance can be thought of as how strong the connection between two people is. Social distance provides a measure of both strong and weak ties as articulated by Mark Granovetter.

Without some measure of social distance or strength of connections, any privacy filter is going to fail. The social graph fails to represent the real world connections between people properly.

Facebook attempts to approximate social distance with groupings of friends, but this is cumbersome. The manual work of setting up groups and categorising everyone into them is a major barrier to use. People are lazy.

What is needed is an automated method for calculating social distance. Social distance is calculated (and this is how Mark Granovetter categorised connections) from the frequency of communication. Measuring that frequency is difficult for Facebook. While Facebook can measure wall posts, internal emails, poking and so on, so much of our communication occurs outside of Facebook, outside of the wall (email, IM, phone calls, SMS, Twitter, parties attended and the rest), that the frequency of communication within the wall is not a reasonable approximation of the wider frequency of communication.

The key measure of social distance, communication, is hard to quantify because it is dispersed across many different channels. Porting the data in to capture the frequency of communication is one way of dealing with the issue. The other, probably more realistic, method is to start with some rules and use whatever can be easily quantified to refine the measure of connection strength over time.

The rules would look at what is known generically about social connections. Some of the rules are:

  1. Marriage is a strong connection
  2. Sharing a surname is a strong connection
  3. If someone has strong connections to friends with whom you also have strong connections, then you probably have a strong connection with them
Some of these rules will dictate a very strong connection (the first rule) while others will dictate varying strengths dependent on factors such as prior connections with other friends (the third rule). All connections start as very weak and are refined first by applying the rules and then, over time, by measures of communication frequency.
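As a rough sketch of how such a rule-plus-refinement scheme might look (entirely my own illustration; the weights, names and thresholds are invented, and Facebook exposes no such API), the rules seed an initial strength and observed communication refines it:

```python
# Hypothetical sketch of rule-based connection strength, later refined by
# observed communication frequency. All field names and weights are illustrative.

def initial_strength(a: dict, b: dict, mutual_strong_ties: int) -> float:
    """Start every connection very weak, then apply generic social rules."""
    strength = 0.1                      # all connections start very weak
    if b["id"] in a.get("married_to", ()):
        strength = 1.0                  # rule 1: marriage is a strong connection
    elif a["surname"] == b["surname"]:
        strength = max(strength, 0.8)   # rule 2: shared surname suggests family
    # rule 3: strong ties to people you share strong ties with
    strength = max(strength, min(0.7, 0.1 + 0.2 * mutual_strong_ties))
    return strength

def refine_strength(strength: float, messages_per_week: float) -> float:
    """Refine over time using whatever communication frequency is measurable."""
    observed = min(1.0, messages_per_week / 10)   # saturate at ~10 messages/week
    return 0.5 * strength + 0.5 * observed        # blend rules with observation

alice = {"id": "alice", "surname": "Smith", "married_to": {"bob"}}
bob = {"id": "bob", "surname": "Smith"}
print(initial_strength(alice, bob, mutual_strong_ties=4))   # 1.0 (married)
print(refine_strength(0.1, messages_per_week=8))            # 0.45, a weak tie warming up
```

The point of the sketch is the shape of the approach, not the particular numbers: rules get you a usable starting point, and whatever communication the site can see nudges the estimate towards reality.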

Privacy filters all start with knowing the distance between two end points, whether physical distance in centuries past or social distance today. Until Facebook, or any other social site, has a measure of social distance, privacy filters are going to be mediocre at best and more often prone to failure.

Tags: Privacy, Facebook, Filters

Friday, September 26, 2008

Failure of Filters

The title of this post is taken from the keynote that Clay Shirky delivered at the NY Web 2.0 Expo in September 2008. The premise of the keynote is that the “information overload” we are facing is not a problem but a fact (one that has been around since Gutenberg and his movable type press), and what we are seeing now is the collapse of the traditional filters that mediated it.

The existing filters for information were founded in the difficulty of moving information over distance. The various communications technologies of the 20th Century have steadily eroded the tyranny of distance. The web completed the destruction of distance filters by removing all concept of spatial distance for information.

Our sense of privacy is likewise bound up in the difficulty of moving information over distance. Physical distance is the basis for the whole concept of privacy: the closer we are to other people, the less privacy we expect. That was a reasonable rule of thumb because those closest to us (community, family, friends) were likely to be spatially close to us. We only need privacy safeguards now because the rule of thumb no longer applies: spatial distance is meaningless for information.

Information overload and privacy issues are rooted in our expectation that filters based on spatial distance will continue working in a world where information has no spatial component. Filters built with that expectation don’t work. Instead we have to create a new framework for filters that do not rely on spatial distance.

By borrowing ideas from science we can create a framework that doesn’t rely on spatial distance. The framework is based on data half-life, data permeability and data potential. Data half-life is a measure of how long a piece of data takes to lose half of its relevancy/importance. Data permeability is a measure of how hard it is for data to move over a period of time (think of fluid moving through a filter). Data potential is the initial potential for the data to move (think of potential energy in Newtonian dynamics).

The interaction of these three parameters determines how far and how quickly information can travel within an environment where spatial distance has no meaning. An analogy will help illustrate how the parameters behave together to filter information.

Let’s say we have some information: the death of a village chief. The village has good roads and the news is to be sent by horse. This information will go far because it is important (the chief of a village), it is easy for the information to move on the roads, and the horse is quick. If, however, the person who died is not the chief, then the news won’t travel as far because it is not as important. It is the interaction between the data half-life (how important the person is), the data permeability (how easy it is to move the information) and the data potential (how fast the information can move) that determines how far the information will travel.

Changing the parameters creates a varied set of filters that determine how far and how fast information will diffuse. Each connection has a level of data permeability, and incoming information is assigned a data half-life and a data potential. The information only passes the filter when the data half-life and data potential are enough to overcome the data permeability.

To illustrate, consider changing your relationship status in Facebook. If someone changes their status from in a relationship to single, they don’t necessarily want the information to spread quickly through their “Facebook friends”, as those friends will include work colleagues and friends of friends met only once. Instead, each of their connections should have a different data permeability, and depending on the information (its data half-life and data potential) it will show up in some connections’ news feeds right away, in others within days or weeks, and in others never at all.
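A sketch of how those three parameters might interact follows. This is my own toy model, not anything Facebook provides; the permeability values, the relevancy floor and the relationship between the parameters are all invented for illustration:

```python
# Hypothetical sketch: data permeability slows information down, data potential
# pushes it forward, and data half-life decides whether the item is still
# relevant by the time it would arrive. All numbers are invented for illustration.

def arrival_hours(half_life_hours: float, potential: float,
                  permeability: float, relevancy_floor: float = 0.05):
    """Hours for one item to cross one connection, or None if it never passes."""
    hours_to_cross = permeability / potential              # harder and weaker = slower
    remaining = 0.5 ** (hours_to_cross / half_life_hours)  # relevancy left on arrival
    return hours_to_cross if remaining > relevancy_floor else None

# A relationship-status change: short half-life, ordinary potential.
status_change = dict(half_life_hours=6, potential=1.0)

connections = {
    "close friend":     2,    # low permeability: information moves almost immediately
    "work colleague":   12,
    "friend of friend": 48,   # high permeability: information barely moves at all
}

for name, permeability in connections.items():
    print(name, "->", arrival_hours(**status_change, permeability=permeability))
# close friend -> 2.0, work colleague -> 12.0, friend of friend -> None
```

Setting permeability per connection, rather than per post, is what lets the same piece of information reach a close friend immediately and a near-stranger never.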

There is no single way to create and calculate data half-life, data potential and data permeability. Various developers will come up with their own methods, some of which will work and others that won’t. Hopefully further down the track we will see standardisation in calculating the parameters, based on accepted criteria for each type of information (personal, communications, knowledge, etc.).

Tags: Filters, Information Overload, Privacy, Clay Shirky

Monday, August 18, 2008

Technology and moral responsibility

"Now I am become Death, Destroyer of Worlds"
- J. Robert Oppenheimer (quoting Krishna in the Bhagavad Gita)

A recent conversation with a friend has got me thinking about the oft-used phrase "technology is amoral". It is usually used to justify why research or a technology should be pursued when there are foreseeable abuses of that research or technology.

I've used the phrase myself without really thinking what it means. Until now.

The phrase isn't a justification but rather a cop-out: a way for the researcher or technologist involved to avoid thinking through the ramifications of the research or technology and ultimately taking responsibility for its possible misuses.

Every researcher and technologist must consider the moral ramifications of what they are doing. It is not a straitjacket of right or wrong, but rather a thought experiment to understand the ramifications of the research or technology being pursued. The aim is to answer questions such as: what are the moral ramifications of the research or technology? What could happen if things went wrong? What could happen if it were used immorally? Do the benefits outweigh the risks?

Being unable to answer those questions about the research or technology being pursued is a gross act of negligence, too often justified by the phrase "technology is amoral".

The phrase should really be "technology is amoral, but technologists aren't". As scientists and technologists we must take moral responsibility for how our research and technology are used.

tags: Technology, Morality

Thursday, August 07, 2008

Can Silicon Valley Save the World?

(ed. this post is inspired by a recent trip to Namibia)

Can Silicon Valley save the world?

No.

Silicon Valley has become divorced from the realities of the vast majority of people's lives. There is simply no sense of what is important in people's lives. Silicon Valley has become about technology for technology's sake, and technology's value is only realised when it is applied to solve a real problem. Speeding up how quickly you can print solar cells by 2% is not a problem. Making it easy for rural farmers in Namibia to use solar power is. A human/donkey-powered harvesting machine that requires no power other than human/donkey sweat is a real solution to increasing the productivity of small African farms.

Umair Haque and Robert Scoble have both pointed to the malaise within Silicon Valley. Umair even set out a challenge to Silicon Valley to solve the real problems. I have serious doubts whether Silicon Valley can answer the challenge. Being unable to answer the "call to arms" will have a serious effect on the influence of Silicon Valley. The money is in solving real problems and real pain. Another Twitter is not going to make money.

My pessimism comes from the ivory-tower aspect of Silicon Valley, the disconnectedness. If you are blind to real problems and real pain, how are you going to solve them? How are you going to develop the technology that solves them? Too often engineers and VCs get dazzled by the technology and not by how the technology solves the problem. Quickly the technology is pushed forward with little consideration of whether its development will actually solve the problem it was designed to solve. Soon the technology no longer solves the problem but has become an end in itself.

Technology only has value when it solves a problem. It might be a masterpiece of engineering, but unless it solves a problem it is worthless.

It is possible to reverse this illness. The cure is simple. Travel. By travel I don't mean going to conferences in far-off places. By travel I mean getting out and seeing the countries you are visiting. Get off the beaten track and mix with the locals. Go backpacking. Travelling is one of the few ways you bump up against the problems, where you have the opportunity to see people's pains first hand. Documentaries are a poor shadow of real travel.

It is fair to ask what the real problems are. Here's a (very) short list:

  • How do you increase the productivity of small farms without chemicals or expensive fuel? (hint: human-powered mechanics)
  • How do you make it easy to install solar power? (hint: corrugated iron)
  • How do you keep mobile phones charged cheaply when there is no mains power and no solar charger? (hint: hand-crank radio)
Solving these problems would make a huge difference, and solving them does not depend on where the problem lies. These are problems (even if masked) across the world.

tags: Umair Haque, Robert Scoble, Silicon Valley

Thursday, July 31, 2008

Pulling out of recession

Most developed economies are facing a slowdown if not a recession. Mark Cuban has described his method for pulling an economy out of recession. I broadly agree with his idea, but I think that with one addition it would address the biggest issues facing startups.

Startups (and small businesses as well) face two major hurdles or barriers to entry. The first is regulatory and the second is capital. Mark Cuban's plan addresses the regulatory barrier to entry and some of the capital issues (not paying taxes improves a company's cash flow). But it only goes so far. Not paying taxes is a moot point if you don't have any revenue to support yourself.

To address the capital barrier, governments should implement a HELP-style business loan scheme. In this case the government would provide an initial loan that covers a single person's wage for a year. The loan could be renewed for a one-year extension. The idea is that the loan allows a small company to meet salaries for its initial employees during the launch phase, when little to no revenue is available and other sources of funding are not useful.

The loan is then paid back through the tax system. The scheme would have several limits, such as being capped at the average yearly wage, and a company would only ever be allowed to have 5 employees use the scheme. Individuals would also need to be limited in how often they could take out such a loan (say once in any 5-year period).

The scheme is designed to allow people to take the plunge and spend that all-important first year getting off the ground. It is very similar in intent to the Y Combinator fund.

With Mark Cuban's plan addressing the regulatory burden faced by startups and small businesses, and the loan scheme addressing the initial capital requirements, people will find it easier to start a business and get it through that all-important first year.

Tags: Economy, Mark Cuban, Finance, Policy

Intersection of the web and tangible objects (manufacturing-as-a-service)

MIT's Technology Review has an interesting article examining the rise of web services that allow users to design and make objects via the web. These services have the potential to disrupt traditional manufacturing. While it is early days, it's useful to consider what disruption these services could lead to.

This disruption potential comes from two different causes. The first is the rise of micro-businesses around manufactured products (I am excluding already existing craft companies which simply use the web to sell their products). One example is a company that would allow users to find and order fittings such as taps and faucets that are no longer made by the original manufacturer. The company creates and ships the taps and faucets on demand to the user. Think print-on-demand, but for products.

Manufacturing-based web services open up the long tail of manufactured products. Suddenly, manufactured products never really stop being made, just as books never really go out of print with print-on-demand services from Amazon et al.

The other cause of the disruption is the way these web services change the economics of manufacturing, and how this ties into increasing consciousness of the energy/carbon cost of manufactured products. Why buy simple products like plates, pots and pans that are made in China and shipped halfway around the world when you can purchase the same products at roughly the same price, shipped from someone down the road?

Carbon concerns will change the economics of manufacturing by making smaller factories closer to large markets more economical than large centralised factories. Web services based around manufacturing will further erode the economics of large-scale factories by removing the need for companies to purchase or rent expensive manufacturing machines. These companies are the first examples of manufacturing-as-a-service.

Manufacturing-as-a-service removes the equipment and capital barrier to entry for new product companies. These companies can focus on product design and on building the community around the product, leaving the operation of the manufacturing hardware to the manufacturing-as-a-service providers.

There will be a range of responses to manufacturing-as-a-service. Some manufacturers and factories will ignore it, only to be overtaken by the tide. Others will embrace it and aim to become the Amazon Web Services of manufacturing-as-a-service. The response will largely be determined by the DNA of each company.

Tags: Manufacturing-as-a-Service, Web Applications, Trends

Tuesday, June 10, 2008

The iPhone 2.0 is NOT a mobile phone

One of the big concerns I have with much of the analysis of the mobile phone market (for example Om's round-up here) is the implicit assumption that the iPhone 2.0 is a mobile phone.

It is not.

The iPhone 2.0 is a mobile computing platform. Why do you think the keynote spent so much time looking at the Apps?

It's the mobile computing platform that is the game changer. As a straight phone (even a smartphone) the iPhone has great usability but is only so-so in terms of extra features (as many a blogger will tell you ad nauseam). But as a mobile computing platform nothing compares.

More importantly, Apple is repeating the iTunes/iPod strategy of building a seamless end-to-end system, in this case a seamless end-to-end mobile computing platform: one that includes development, hardware and distribution of the applications.

Apple is pursuing an edge strategy that redefines the general idea of mobile towards the definition used by Tim O'Reilly in his recent Web 2.0 keynote in San Francisco in 2008. It drives innovation to the edge and upturns the existing industry.

iPhone 2.0 is NOT a mobile phone. It is a mobile computing platform.

Tags: iPhone, mobile phone, mobile computing