Social media usage following the Christchurch earthquake

A few random thoughts on some of the applications of social media following the Christchurch earthquake. I’ll attempt a more meaningful formulation soon.

In the first minutes, real-time tools like Twitter and TwitPic really came into their own. First the news broke that a major quake had occurred. Reports followed of its magnitude, and then the first pictures and videos started to come in. Within 15 minutes, #eqnz had become the accepted hashtag. Despite the massive fragmentation of sources, thousands of people all over the world were watching the story evolve in real-time, reading the same tweets and viewing the same images.

In those first minutes, the thing most people were concerned about was finding out what the hell was going on. As such, open and real-time were absolute requirements. Several people and organisations recommended Facebook pages as useful resources, but in this instance they were way off the mark. I don’t want to become a fan of (sorry, ‘like’) a page in order to report a missing pet or loved one, and I want my plea for help to have the widest possible audience – not just people who have also signed up to that page.

It was interesting to once again observe a kind of passive-aggressive turf war between proponents of alternative hashtags. #eqnz emerged almost immediately, followed by the Earthquake Commission’s proposed #chch. Why the hell they felt the need to upset the apple cart is beyond me.

I’ve read a few posts lately with people bitching about the continued use of the now-defunct ‘RT @…’ method of retweeting. Personally I couldn’t see what all the fuss was about, and really didn’t give a toss one way or the other until yesterday. Looking through the #eqnz stream in search of new information was a nightmare, thanks to thousands of RT-style retweets that, had their authors used the native retweet button, would have appeared as a simple numerical increment on the original tweet. Instead, RT-style retweets enter the stream just like an original tweet, but with a later timestamp. When people are looking for real-time information, wading through thousands of hour-old (oh, how our expectations have changed) posts is a painful and unnecessary drag. I’ll never use the RT style again. Who’s with me?

It seems every man and his dog tried to get their favourite celebrities to retweet links to the NZ aid organisations, and many did so (kudos to Simon Pegg, Nick Frost, and Stephen Fry; screw you, Oprah). I can’t help but wonder whether these pleas were a genuine attempt to leverage star power in a time of need, or a sad form of 21st-century autograph hunting. Nobody collects signatures anymore – it’s all about @ replies and retweets, don’t you know. Does it matter? No, but it’s interesting to me at least.

Within minutes of the first pictures of the collapsed Christchurch cathedral coming out, its Wikipedia page and image had been updated to reflect its current condition. Not only that, Wikipedia administrators had flagged the image as a candidate for removal due to its dubious copyright status. Wikipedia is such an efficient animal, especially at times like this (I’m reminded of similar instances around Steve Irwin’s death and Pluto’s demotion to non-planet status). It never ceases to amaze and impress me.

Once the mainstream news organisations started to get a handle on the situation – for the first couple of hours they were essentially re-publishing information and images sourced from Twitter – social media usage seemed to shift into recovery mode. People wanting to locate missing loved ones were tweeting their names and possible locations, and people on the ground were attempting to find them. Several wiki and wiki-like projects kicked in, creating centralised registers of the missing and the found. Wikis were the ideal technology at this point, balancing out the noise and evanescence of the Twitter stream.

Some of these digital tools also gave concerned people all over the world a way to get involved. I looked across my office at one point and saw a colleague scanning the Twitter stream for reports of missing people and updating a missing-persons wiki. Around a dozen people were working on it simultaneously. Where and who were the others? I have no idea. It doesn’t matter.

Misinformation is always a challenge with social media, and yesterday was no different. When reports came in of damage to the Christchurch cathedral, the accompanying pictures were actually of a different church that had been totally destroyed. Some well-meaning soul assumed blood would be in demand, and put the word out that donations were urgently required. The resulting flood of offers – not needed, thanks to regular donors such as myself – placed undue pressure on the Blood Service, who were forced to divert attention from their task at hand and respond with their own assurances that blood stocks were fine and dandy. It’s not all bad news though – because the misinformation was largely on Twitter, the Blood Service was prompted to create their own account (@nzblood) in order to join the conversation. Here’s hoping they stick around.

Misinformation is by no means the sole domain of social media. Last night I was appalled to see TV3 news anchor Hilary Barry announce ‘unconfirmed reports’ of a death toll as high as 300 to 400 people. Unconfirmed reports? Why not preface it with ‘a bloke in the pub told me’? If this is what counts as journalistic integrity in the 21st century, it’s no wonder the old media establishments are struggling. If I’m going to soak up a bunch of speculation and hearsay, I might as well get it for free and without a 15-second ad at the start.


Simplex methods and the problem of endless consumer choice

One of the more interesting projects I’ve worked on lately involves a large retailer looking to improve their online sales channel.

As it stands, their website covers a large number of brands, and within each brand there are many categories, sub-categories, products, and product variants.

For client confidentiality reasons I won’t name the company or industry, but it’s a common enough situation. For example, in the clothing industry there are brands, styles, men’s and women’s lines, and cut and colour variations.

Traditional approaches to helping customers explore deep and varied product offerings typically centre on two modes: hierarchical browsing and search. In both cases the customer narrows the catalogue – by entering keywords and/or selecting criteria from the brand -> category -> sub-category -> product schema – to generate a list of products that satisfy those criteria, and then evaluates each option in turn.

Many would argue that there is a reason that these two modes of exploration are the norm, and that there is no need to fix that which ain’t broke. Screw ’em.

While pondering the ifs and hows of improving on browse and search, it occurred to me that both are open to a valid criticism on efficiency grounds. Whether you’re browsing a catalogue or reviewing search results, looking at an individual product helps you eliminate unsuitable options, but it doesn’t bring you any closer to finding the product that best suits your needs. You may chance upon a product that ticks all your boxes and decide to stop searching, but you have no way of knowing whether your choice was optimal – a couple more clicks might have turned up a better product for half the money.

According to the Secretary Problem, the optimal strategy is to pass on the first 37% of options and then commit to the first one that beats everything you’ve seen so far – a rule that lands you the single best option roughly 37% of the time. Neat, but not particularly helpful when there are thousands of products to choose from.
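If you want to convince yourself of that 37% figure, it’s easy to simulate. A minimal Python sketch (a toy of my own, nothing to do with the retailer’s data):

```python
import random

def secretary_trial(n):
    """One trial: skip the first n/e candidates, then take the first
    candidate who beats everyone seen so far."""
    candidates = list(range(n))      # candidate quality = value; n-1 is the best
    random.shuffle(candidates)
    cutoff = int(n / 2.718281828)    # the observe-only phase (~37% of the pool)
    best_seen = max(candidates[:cutoff], default=-1)
    for c in candidates[cutoff:]:
        if c > best_seen:
            return c == n - 1        # did we stop on the true best?
    return candidates[-1] == n - 1   # otherwise we're stuck with the last one

trials = 100_000
wins = sum(secretary_trial(100) for _ in range(trials))
print(f"Picked the best candidate in {wins / trials:.1%} of trials")
```

Run it and the success rate hovers around 37%, as advertised.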

Either way, traditional search and browse methods are woefully inefficient. What we need is a mode of exploration whereby each time an option is evaluated, it significantly increases the odds that the next option we evaluate will be the optimal one.

It seems to me that researching a purchase is essentially a complex optimization problem, whereby consumers are looking to find a product that maximizes some attributes and minimizes others, within a set of constraints that includes factors such as cost.
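Put loosely in symbols (my own framing, not a formal model from the project), the customer is trying to solve something like:

```latex
\max_{x \,\in\, \text{catalogue}} \; \sum_i w_i \, a_i(x)
\quad \text{subject to} \quad
\mathrm{price}(x) \le \mathrm{budget}, \qquad a_j(x) \ge \ell_j \ \text{for all } j
```

where the a_i(x) are product x’s attributes, each weight w_i is positive for an attribute the customer wants to maximize and negative for one they want to minimize, and the ℓ_j capture hard requirements.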

Now, when there are very few options to explore, you can easily get away with assessing the value of each one, ordering them highest to lowest, and picking the one at the top of the list. It takes a little work, but we know this is the optimal choice because it’s the one that yields the greatest value within the specified constraints. There may be other choices, but all of them will provide less value and/or sit outside your constraints (budget, etc.).
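In code terms the small-catalogue case is trivial – here’s a toy sketch with invented products and a made-up value function (quality per dollar), just to make the point:

```python
# With only a handful of options, brute force works: score every product with
# a (made-up) value function and take the best one that fits the budget.
products = [
    {"name": "A", "price": 40, "quality": 7},
    {"name": "B", "price": 45, "quality": 8},
    {"name": "C", "price": 60, "quality": 9},
]
budget = 50

affordable = [p for p in products if p["price"] <= budget]
best = max(affordable, key=lambda p: p["quality"] / p["price"])
print(best["name"])  # guaranteed optimal, because we evaluated everything
```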

When the number of options is very large, however, this approach just doesn’t work due to the near-infinite number of product attribute combinations that need to be evaluated and ranked. The Travelling Salesman Problem is a great example of how a seemingly simple optimization problem can turn into a computational nightmare once you move beyond a few variables.

The Simplex Algorithm (devised by the great George Dantzig, a.k.a. Will Hunting) is quite possibly the most outlandish mindfcuk ever conceived by man. I managed an A in the Linear Programming paper I took in my final year as an undergraduate, and by necessity knew it well. I could (and did) do this stuff all day, right up until the day after the final exam. Then I needed to free up some brain power to resume vital functions such as breathing, preparing for postgraduate study, and playing Three Man, and promptly forgot the lot. But I digress…

Loosely speaking, the simplex algorithm is a very efficient mathematical technique for solving complex optimization problems. Rather than evaluating and ranking all possible alternatives against a stated objective function, the simplex algorithm runs along the lines of:

Start out with a basic feasible solution (thinking visually, this is a vertex on the boundary of the n-dimensional shape formed by plotting the constraints in n variables – see the image below). From that vertex, look at the slope of your objective function. If it’s decreasing (for reasons I won’t go into, optimality here is achieved when the objective function is minimized), move to the adjacent vertex that gives the greatest decrease in the objective function. Keep doing this until you reach a vertex where the objective function no longer decreases. That’s the optimal solution.

A system of linear inequalities defines a polytope as a feasible region. The simplex algorithm begins at a starting vertex and moves along the edges of the polytope until it reaches the vertex of the optimum solution. (Source: Wikipedia)
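To make that concrete, here’s a minimal sketch of solving a small LP in Python with SciPy’s linprog (my own toy problem; note that recent SciPy versions default to the HiGHS solvers rather than classic simplex, but the vertex-hopping idea is the same):

```python
from scipy.optimize import linprog

# Toy problem: minimise cost = 2x + 3y
#   subject to  x + y >= 10   (rewritten as -x - y <= -10 for linprog)
#               x <= 8,  y <= 8,  x >= 0,  y >= 0
c = [2, 3]                          # objective coefficients (to be minimised)
A_ub = [[-1, -1], [1, 0], [0, 1]]   # left-hand sides of the <= constraints
b_ub = [-10, 8, 8]                  # right-hand sides

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(result.x, result.fun)         # optimal vertex and its objective value
```

The solver lands on a vertex of the feasible region (x = 8, y = 2 here, for a cost of 22), exactly as the picture above suggests.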

I believe that a similar approach would work wonders if applied to the problem of consumer choice. What if, instead of expecting a website visitor to browse an entire product offering or wade through hundreds (if not thousands) of search results, we instead got them quickly to a single ‘best bet’ product to evaluate (a ‘basic feasible solution’ that meets their criteria but may or may not be the optimal one), and used that as an anchor of sorts, to guide them towards finding the optimal product to buy?

In practice, it could work like this:

1. Ask the customer for an indication of what they’re looking for.
2. Instead of an ordered list, return a ‘full details’ view of a single product, accompanied by a limited set of related products that also match the customer’s criteria (the adjacent points in the n-dimensional space bounded by the customer’s constraints).
3. Ask the customer to indicate which of the related products is closer to what they’re looking for than the product they’re viewing now.
4. Repeat until the product on display is better than all of the alternatives on offer, at which point we know it’s the one the customer ought to buy.
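For the sake of argument, here’s a minimal sketch of that loop in Python – made-up attribute vectors, a naive nearest-neighbour stand-in for ‘adjacent’, and a simulated customer; all names hypothetical:

```python
from math import dist

# Hypothetical catalogue: product -> attribute vector (price, quality, size)
catalogue = {
    "A": (40, 7, 3), "B": (45, 8, 3), "C": (30, 5, 2),
    "D": (55, 9, 4), "E": (42, 7, 4), "F": (35, 6, 3),
}

def neighbours(current, k=3):
    """The k products closest to the current one in attribute space –
    a stand-in for the 'adjacent vertices' of the simplex analogy."""
    others = [p for p in catalogue if p != current]
    return sorted(others, key=lambda p: dist(catalogue[p], catalogue[current]))[:k]

def explore(start, prefers):
    """Show the current product and its neighbours, move to whichever the
    customer prefers, and stop when nothing on display beats the current one."""
    current = start
    while True:
        choice = prefers(current, neighbours(current))  # None means 'stop here'
        if choice is None:
            return current
        current = choice

# A simulated customer who always moves towards better quality-per-dollar:
score = lambda p: catalogue[p][1] / catalogue[p][0]
best = explore("C", lambda cur, opts: next(
    (o for o in sorted(opts, key=score, reverse=True) if score(o) > score(cur)),
    None))
print(best)  # converges on "B" for this toy data
```

Each step moves to a strictly better neighbour, so the loop terminates at a product that beats everything shown alongside it – a local optimum, in the same spirit as the simplex walk.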

Now, I know there’s a ton of mind-numbingly complex calculation behind this stuff, and I’m not suggesting that a literal application of the simplex algorithm is what’s required here. But as an alternative way of helping prospective customers find the product that best suits their needs – and giving them confidence in the optimality of their selection to get them over that pre-purchase hump – I think there’s a lot to be gained from considering this idea, particularly compared with the traditional, lazy, and costly (in that many people just give up and leave) approach of presenting a product catalogue or list of search results and leaving customers to their own devices.

I don’t know, maybe I’m just going nuts. What do you guys think?


What if Google used this approach? Instead of delivering a list of sites, a search would take you directly to the site Google figures is the best answer to your query, with the next-best sites shown in an overlay frame, allowing you to continue your search by hopping from one site to the next (the set of recommendations improving with each click) without having to return to the results page. Possibly not great from a revenue perspective, but as a user it’d be pretty cool, no?