Lightweight Models and Cost-Effective Scalability

When considering this week's pattern of lightweight models and cost-effective scalability, my initial candidates were services I had already discussed. Facebook was my first thought, but that was a prime example of software above the level of a single device. YouTube was the next thought, but that was discussed regarding data being the next Intel; Amazon leveraged the long tail, and eBay innovated in assembly. This became a real head-scratcher, and as I so often do I turned to Wikipedia and then, eureka!

Wikipedia not only satisfies the pattern of a lightweight and scalable Web 2.0 application, it would also have been appropriate for just about every one of the seven preceding patterns. This collaboratively built online encyclopedia was conceived back in 2001 as a feeder site to the already established Nupedia, a scholar-based collaborative effort. Wikipedia was built on the idea that anyone could create, contribute to and edit the articles, and by the end of its first year it had amassed over 20,000 articles in 18 different languages. Today Wikipedia boasts over 15 million articles, 3.2 million of which are in English, and receives anywhere from 25,000 to 60,000 page requests per second. It's this exponential growth that has shown how lightweight and scalable Wikipedia has needed to be in order to keep up with its rapid rise to the number 6 website on the internet.

Being totally user driven, Wikipedia runs on a skeleton staff to say the least. Its parent organisation, the Wikimedia Foundation, employs only 39 people to handle the technical and legal tasks, while much of the moderation and administration is conducted by volunteers. The bulk of Wikipedia's funding comes from voluntary contributions to the service and grants from sources such as universities and government. Being a not-for-profit venture, revenue goes purely towards the hardware needed and the employees who maintain it.

In terms of effective scalability, Wikipedia started out on a single server running custom-made open-source wiki software built on top of the MySQL database technology. This single server stayed in place until 2004, when demand became too great and the transition to a multi-tiered architecture took place. The software itself went through three phases of development as demand grew, with the MediaWiki platform emerging in 2002, designed in such a fashion that it could be modified and updated without the need for a major rethink. The next major change came in 2005 with the introduction of the Lucene search technology in place of the MySQL-based search. The physical home of Wikipedia is an impressive array of some 300 Linux-based servers in Florida and another 44 in Amsterdam, all sitting behind front-end Squid caching servers.
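To get a feel for why those front-end caches matter so much, here is a minimal sketch of the cache-in-front idea. The class names and behaviour are purely illustrative and are not Wikipedia's actual code.

```python
# A minimal sketch of the "cache in front" idea behind Wikipedia's Squid layer.
# The class names and behaviour are illustrative, not Wikipedia's real code.

class ArticleBackend:
    """Stands in for the MediaWiki application and database servers."""
    def render(self, title):
        # Imagine an expensive database query plus page rendering here.
        return f"<html><body>{title} article text</body></html>"

class CachingFrontEnd:
    """Stands in for a Squid-style reverse proxy: serve from memory if possible."""
    def __init__(self, backend):
        self.backend = backend
        self.cache = {}

    def get(self, title):
        if title in self.cache:            # cache hit: cheap, no backend work
            return self.cache[title]
        page = self.backend.render(title)  # cache miss: fall through to the backend
        self.cache[title] = page
        return page

front_end = CachingFrontEnd(ArticleBackend())
front_end.get("Long Tail")   # the first request hits the backend
front_end.get("Long Tail")   # repeat requests are served straight from the cache
```

Because most readers request the same popular pages, the bulk of those tens of thousands of requests per second can be answered straight from the caches without ever touching the heavier application and database servers behind them.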

Wikipedia's no-frills approach to the appearance of the website is aimed at being functional and intuitive for the user to interact with, without being overly taxing on resources and bandwidth. The majority of the bulky content, such as images and video, is hosted externally, which reduces the amount of storage Wikipedia needs to employ as well as enabling faster page loads and search results. Wikipedia is a true example of Web 2.0 and the power of the user. Transcending the desktop computer and becoming mobile, it has become many people's first stop for information. Its global adoption and rapid rise to one of the most visited websites show that the model behind Wikipedia has been built on a lightweight and easily scalable principle that has ensured not just its survival but its flourishing web presence.

When someone becomes a Wikipedia contributor it gives them the sense of being part of something big, something with a positive and promising outlook. The open forum gives infinite possibilities: anyone with an idea or an expertise can share their knowledge with the world and collaborate with others to build something unique. For a non-profit organisation to achieve this global status shows just how cost effective it really is.

[Figure: Wikimedia Foundation organisation chart]

[Figure: English Wikipedia article count over time, linear scale]


Leveraging The Long Tail

Leveraging the long tail is a modern marketing strategy brought about by the dot-com era. Without going into great detail, leveraging the long tail basically means being able to sell the hard-to-find, not-so-popular items with relatively little outlay. That is to say, the online retailer doesn't have to maintain one or more bricks-and-mortar storefronts; consolidating its inventory in the one location enables it to carry a far larger variety of items than just the Twilight saga. The best and by far the most successful example of downright exploiting the long tail is of course Amazon. The brainchild of American Jeff Bezos, Amazon.com was conceived all the way back in 1994 before going live a year later. Bezos was eager to jump on the dot-com boom, and Amazon was going to be his meal ticket by offering the consumer a safe, simple and rewarding browsing and, ultimately, shopping experience.

Amazon.com does not physically hold a single book in stock, ever. I didn't know this at first, but once it had been pointed out and explained to me, it's brilliant! Amazon is essentially a search engine that has catalogued the inventories of publishers around the world and facilitates the transaction between them and the customer. This means the book ships directly from the publisher, or seller, to the buyer's door without ever passing through Amazon. That translates to very little overhead for Amazon in terms of storage and distribution; it has, however, invested astronomical amounts of money into developing its web services infrastructure, which has now evolved into an API.
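Roughly speaking, I picture the broker model like the sketch below: the catalogue lives with Amazon, the stock lives with the seller. Every name and entry here is made up purely for illustration; this is not Amazon's API.

```python
# A toy sketch of the broker model described above: the catalogue is held
# centrally, but the stock never passes through the broker.
# All names and entries are invented for illustration only.

catalogue = {
    "isbn-001": {"title": "A Clockwork Orange", "seller": "Example Publisher A"},
    "isbn-002": {"title": "Twilight",           "seller": "Example Publisher B"},
}

def place_order(isbn, buyer_address):
    """Look the title up in the central catalogue and route the order to its seller."""
    book = catalogue[isbn]
    return {
        "title": book["title"],
        "ship_from": book["seller"],   # the seller dispatches directly...
        "ship_to": buyer_address,      # ...to the buyer's door
    }

print(place_order("isbn-001", "42 Example St, Brisbane"))
```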

Diversification has been another major aspect of Amazon's sustainability. Originally an online bookstore, its range of products gradually grew to include CDs, VHS (what's VHS again? :P) and DVDs, and now extends to electronics, furniture and so much more. They have adapted their proven method of connecting sellers with buyers globally to include just about anything you can fit in a box and FedEx.

Alexa.com ranks Amazon.com as the 21st most popular website in the world and number 7 in the USA. With a staggering nearly 7 million hits per day, it's no wonder it has become a 'household' name amongst the internet-shopping savvy. Their ability to list more titles than any physical store while offering competitive pricing is what has enabled them to overtake physical competitors such as Borders and Barnes & Noble in revenue. This, coupled with the convenience of being able to just click a link at any time of day and have the title on its way, has meant that fewer people are going into book shops.

[Figure: longtail.gif, comparing this year's sales]

These material book stores rely on the first part of the long tail principle: the most popular titles will sell in volume while the not-so-popular ones go by the wayside. This literally means stocking mountains of copies of the new Twilight DVD and not bothering to carry A Clockwork Orange on DVD, as every space on their shelves is costing them money. To put this all in perspective, for Walmart to put a CD on its shelves it needs to sell at least 100,000 copies of that CD to cover the cost of buying the stock, paying someone to put it on the shelf and then processing the sale. This means Beyonce will be in every Walmart store while little-known Aussie band Dead Letter Circus will never be available in store, and online music stores like iTunes or Amazon.com are their only hope of getting their records out there.

It doesn't cost Amazon.com anything to have the Dead Letter Circus record on their virtual shelves, and as such they will always be able to list it even if they only sell one copy. That's one sale HMV missed out on by being limited to the finite amount of space on their shelves. This long tail of niche content is only available thanks to the internet and the online stores that have realised the market out there for it.
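A quick back-of-the-envelope calculation makes the contrast clear. The roughly 100,000-copy breakeven figure is the one quoted above; every other number here is invented purely for illustration.

```python
# Back-of-the-envelope illustration of the shelf-space argument above.
# Only the ~100,000-copy breakeven figure comes from the text; the rest is made up.

SHELF_COST_PER_TITLE = 150_000.0   # hypothetical yearly cost of stocking a title chain-wide
MARGIN_PER_COPY = 1.50             # hypothetical profit per physical copy sold

breakeven = SHELF_COST_PER_TITLE / MARGIN_PER_COPY
print(f"A physical chain breaks even at about {breakeven:,.0f} copies")   # ~100,000

ONLINE_LISTING_COST = 0.0          # a virtual shelf entry costs effectively nothing
profit_on_one_sale = 1 * MARGIN_PER_COPY - ONLINE_LISTING_COST
print(f"An online store still profits ${profit_on_one_sale:.2f} on a single niche sale")
```

Even one sale of a niche title is worth listing online, and that is exactly the sale the physical store can never make.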


Perpetual Beta

Perpetual beta is a different approach to the development and testing of web applications, and it differs from the traditional style of beta testing conducted by companies developing an out-of-the-box, installed piece of software. This idea of perpetual beta, used in the transition to a browsing experience dominated by user participation and increasingly rich content, has only been made possible by the advent of Web 2.0. This evolution of the internet means that large and complex applications can be delivered to the user while still giving the experience of an installed piece of software. My favourite recent example of this is Pixlr, a browser-based photo editor. Having used Photoshop extensively for over five years, I think it is a truly impressive service.

The best and most recognised example of this style of development is of course Google's Gmail. Google launched Gmail as an invite-only, live, working beta service way back in 2004. This smaller initial launch meant live testing with instant statistics and feedback from the users, who are essentially test subjects. I didn't start using the service until it went to public release in early 2007, still sporting the beta title. The beta version I was experiencing provided a better experience than Hotmail, which, without a beta tag, must have been a finished product, right? What I am trying to say is that the 'work in progress' suggested by the beta tag was in practice no different from an established service such as Hotmail. That isn't to say Hotmail has been static for any period of time; it constantly evolves and changes at a similar rate. This is a title Google employs with most of its new products, like my previously discussed new favourite, Google Wave.

The reason Google uses this method of a pre-release working version that eventually evolves seamlessly into the new and improved non-beta Gmail is that they are developing on an entirely different principle of software design than that of traditional off-the-shelf programs. Google is rapidly dominating the information industry by diversifying its web presence, so much so that they have built an entirely browser-centric operating system, Chrome OS. This is what differentiates Google from older-generation software companies like the ever-enduring Microsoft. Although anyone who's used Windows knows that it's really perpetual beta in disguise, with the Windows Update utility 😛

Traditional software development employed beta testing in a walled garden by developers or a select invited few. This is a costly and time-consuming phase of software development, but it is critical to ensure a reliable product for the paying consumer. The software is then installed onto the PC and runs locally, with all of the necessary data stored on the hardware or disc. It is expected to work without needing to be updated or fixed.

When building an application in the browser, none of this data is stored locally on the machine; it is accessed and viewed via the internet. The user is always using the latest version without needing to download and install "service packs", which also means everyone is using the same version. This shift from local applications to efficient and intuitive web-based technology, coupled with the growing interest in cloud computing, will make for an exciting time. The potential for enterprise systems to become browser-based means that perpetual beta is going to become a much more widely used method of development.
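One common way to run a perpetual beta is to ship a single deployed version of the application and switch new features on server-side for only a slice of users while feedback rolls in. The sketch below is a generic illustration of that idea with made-up feature names; it is not how Gmail or any other Google product actually does it.

```python
# A generic sketch of server-side feature flagging, one common way a web app can
# run a "perpetual beta": everyone loads the same deployed code, but experimental
# features are switched on for only a fraction of users while feedback is gathered.
# Feature names and percentages are hypothetical.

import hashlib

ROLLOUT_PERCENTAGE = {
    "new_inbox_layout": 10,   # 10% of users see the experimental layout
    "chat_sidebar": 100,      # fully rolled out to everyone
}

def feature_enabled(feature, user_id):
    """Deterministically bucket a user into 0-99 and compare against the rollout level."""
    digest = hashlib.md5(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < ROLLOUT_PERCENTAGE.get(feature, 0)

print(feature_enabled("new_inbox_layout", "toby@example.com"))
```

Because the flag lives on the server, widening the rollout from 10% to 100% is a configuration change rather than a new release, which is exactly the "everyone is always on the latest version" effect described above.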

Software Above the Level of a Single Device

The best example of this that I can think of, and in my opinion a plague sweeping the globe, is, that's right, Facebook. This social networking juggernaut has long since left the humble desktop as its home and spread its clutches into mobile devices and even games consoles. With telcos offering free Facebook access as an incentive to sign a 24-month contract on a new iPhone plan, the trend is being perpetuated and further ingrained into everyday society.

The Facebook app on the iPhone, and indeed nearly every other smartphone on the market, is enabling people to update their statuses a million times a day as well as making mobile uploads of pointless photos possible! Not only that, but upon purchasing my new HTC phone it gave me the option to sync my contacts list with everyone's Facebook pages, meaning I would always have a fun, up-to-date picture of whoever was calling me! Needless to say I resisted the urge to key in each person's email and tediously make it happen, but the fact it's even a possibility, I will admit, is impressive.

A similar function is built into the PS3. When you create your PlayStation Network account it gives you the option of syncing it to your Facebook, so you can post a status update between every round of Call of Duty! Now that's a must!

Setting aside my dislike for Facebook, I will admit that it is a truly impressive service, not only in terms of its sheer scale and user base but also in its ability to transcend the desktop computer and become a native app on so many devices. They have been a major part of pioneering this multi-platform user experience, enabling consumers to maximise its functionality and personalise the service to their individual requirements.

This style of software above the level of a single device is proving to be an exciting and revolutionary step forward in the way people interact with the internet. Some of the companies I have already discussed in previous blogs are doing the same thing, with YouTube being possibly my favourite mobile app of all time! eBay has also gone truly mobile, so it's becoming fairly obvious that to be successful, or rather to become part of your users' daily lives, this kind of multi-platform service is the way of the future.

PS: so glad we are now using Ning for our class discussions; now I can delete my Facebook!

Rich User Experiences

One of the newest and most exciting web apps that provides a truly rich user experience is Google's Wave. Acting as a hosted conversation tool, Google Wave is Google's attempt to revolutionise email and the way people communicate and collaborate. By simply starting a new wave and adding people to it, you can watch as the participants contribute to the wave in real time, adding text, photos, links and widgets. It has a raft of potential uses in both the corporate and social spheres, whether it's organising a party or working together on a project. I have just started using it myself as a collaboration tool for our group assignment in my mobile devices subject, and it's proving to be extremely good!

We can all work on the same document at the same time as well as being able to chat about it. If you leave the wave and return to it, it highlights all of the changes and tells you who made them and when. It's also one of the first web apps I have seen that gives the drag-and-drop feel of a program running on the machine itself. This in itself is excellent, as there is no need to use the slow, one-at-a-time browse-to-upload tools or the often sluggish bulk-upload utilities available on many of the forums I use.

It also takes advantage of the many other services Google offers by interacting with apps like Google Maps and Gmail, as well as external services such as YouTube and Twitter, with the ability to embed a wave into blogs and the like. This means an active blogger can share more of the online content they have generated with their social network while ensuring a rich and full experience. It is also helping to transcend language barriers with a built-in real-time chat translator, meaning companies with offices around the globe that employ non-English speakers can collaborate and get their point across effectively.

This video is the most concise overview I have found of the key features of Wave, as explained by a pair of its developers.

http://www.youtube.com/watch?v=p6pgxLaDdQw

Innovation In Assembly

This is a harder topic to talk about, since I am lucky enough to be of the younger generation for whom these services and concepts are commonplace. With things like Amazon and eBay having been around since I was 10, I have never stopped to think what the web was like before these architectures were invented. To me, online automated billing and dispatch of goods is standard, and being able to access and use massive live datasets is commonplace. When I ran through the list of sites I use regularly and tried to apply the pattern, the most obvious one, and the one I can relate to most, is ebay.com.

eBay is a totally user-powered service used in both the public and private sectors, satisfying people's need for convenience and their drive for personal gain. Where before you would have just thrown away that old TV or given away that pair of Diesel jeans you don't fit into anymore, now it's a couple of clicks and they're sold. The real innovation in the site is the toolset given to users not only to generate an attractive ad but to build their own unique eBay store if they wish to. Making limited development tools usable by everyone means they can get the most out of the service.

eBay has also extended its service to support embedded advertising featuring live, relevant auctions that play on the user's impulsiveness. It's like the candy in the supermarket checkout line: not something you go out of your way for, but since it's there, you see it, it seems like a good deal and you grab the Snickers bar. It's a cheeky marketing tactic that works. That's a bit off topic, but I find it clever.

The eBay model has been employed by other online auction sites such as oztion.com.au and bidmate.com.au, to name two Australian companies. Although they are their own web services, each developed by its own company, the look, feel and functionality are very much eBay. The user's interaction with the service and the ability to personalise their account make it more user friendly, more their own, and something they find easier to relate to and use. The sites also record which categories you have viewed while logged in and will show you auctions of items you might find interesting, which is indicative of Web 2.0 and of eBay.
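At its simplest, that "show me things like what I've been browsing" behaviour can be pictured as counting the categories a user views and surfacing live auctions from the most-viewed ones. The sketch below is a deliberately simplified illustration with made-up listings; real auction sites use far more signals than this.

```python
# A simplified sketch of category-based recommendations: count which categories
# a logged-in user has viewed and surface live auctions from the favourites.
# Listings and categories are made up for illustration only.

from collections import Counter

viewed_categories = ["Phones", "Phones", "Guitars", "Phones", "Jeans"]

live_auctions = [
    {"item": "HTC Desire",          "category": "Phones"},
    {"item": "Diesel jeans 32W",    "category": "Jeans"},
    {"item": "Fender Stratocaster", "category": "Guitars"},
    {"item": "Old CRT TV",          "category": "TVs"},
]

def recommend(history, auctions, top_n=2):
    """Rank the user's most-viewed categories and return matching live auctions."""
    favourites = [category for category, _ in Counter(history).most_common(top_n)]
    return [auction for auction in auctions if auction["category"] in favourites]

print(recommend(viewed_categories, live_auctions))
# Surfaces the phone and guitar auctions, not the TV the user never looks at.
```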

I haven't found it hard to find good examples of the Innovation in Assembly pattern; I have just found it hard to discuss them. The concept makes perfect sense, and it would seem obvious to build on such a pattern. Developing something that will not only grow as more people adopt it and become comfortable with it, but will also be improved and further developed by them, means sharing some of the work and research, albeit unknowingly, with the consumer. People's desire to continually alter, tweak and personalise things is driving companies to build web services that are becoming more integrated and usable. Companies like Amazon have realised that the consumer wants to take part and contribute, and by giving them the tools to do so they have enriched their services enormously.

In the case of Google Earth and a number of other services, this idea of opening the technology up to the public and allowing them to essentially hack it freely seems to have been done retrospectively, which makes you wonder how much innovation there really was in the assembly. It wasn't until users forced their way into Google Earth that Google realised that the hundred or thousand people on the payroll working on this stuff are good, but what about the 200 million people who are potentially going to use it? Google might employ some of the brightest minds, but not all of them. Innovation in Assembly to me implies that the innovation was done in the actual assembly phase; with a lot of these services it is more like renovation post assembly. Although perhaps that is the point: the Googles and the Amazons have done the research and development and come up with a model, and now all we have to do is take note and get it right from the start.

Data is the Next Intel Inside

After reading this week's Facebook discussion on the topic, my favourite example of the "Data is the Next Intel Inside" pattern is YouTube! I mean, seriously, who doesn't LOVE YouTube! It's one of the most data-rich Web 2.0 services available and has become almost a global standard, even more so than Facebook. It has transcended the PC to mobile devices and even into TVs, DVD players and the humble camcorder. It's doing more than aggregating data; it's harnessing it and making it available to the masses not only to view but to contribute to. It doesn't get much easier than taking a video on your phone and hitting upload to YouTube. It also doesn't get much easier to use than hitting the icon on my phone and browsing today's most viewed in seconds! Providing I have 3G coverage 😛

YouTube has found its way into everything, and it's not just the browsing experience; it's also the tools and the diversity of the whole service. The ability to upload a video and then embed it into your blogs, forum threads, websites, whatever! Not only that, the sheer amount of data is mind-numbing. As for the who-owns-the-data debate, and what is it all worth? The dollar value is easy: it was bought by Google in 2006 for $1.65 billion, and Google now estimates it to be worth in excess of $4.5 billion. But is that amount for the content, or is it for the name? YouTube is instantly recognisable, and everyone automatically makes their own associations with it, whether it's the funny vid they watched on there last night or the clip Channel 7 used in its 6pm broadcast. It's ingrained into everything, kind of like the Intel chip.
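That "embed it anywhere" ability comes down to the fact that a single video ID can be turned into both a watch link and an embed snippet. Here is a small sketch using the standard YouTube URL formats and the Wave overview video linked in the previous post; the width and height values are just common defaults rather than anything required.

```python
# One video ID, many surfaces: a small sketch of why embedding is so easy.
# Uses YouTube's standard watch and iframe-embed URL formats; the video ID is
# the Google Wave overview clip linked in the previous post.

VIDEO_ID = "p6pgxLaDdQw"

watch_url = f"https://www.youtube.com/watch?v={VIDEO_ID}"
embed_html = (
    f'<iframe width="560" height="315" '
    f'src="https://www.youtube.com/embed/{VIDEO_ID}" '
    f'frameborder="0" allowfullscreen></iframe>'
)

print(watch_url)    # paste into a browser, phone app or TV interface
print(embed_html)   # paste into a blog post, forum thread or website
```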

Intel is more or less the PC industry standard. Now that Mac is on board it's all downhill for the alternatives, with the exception of AMD of course, though even they are becoming rarer. In the same way, YouTube is inside everything; it has quite literally become the Intel Inside of the Web 2.0 standard.