Cool Stuff

A few (signed, limited edition) screenprints I’ve bought recently by two of my favourite artists, Shepard Fairey and Ernesto Yerena. I have nowhere to hang these right now, but figure by the time I do these will be way out of my price range so best to buy them now. 

Shepard Fairey, Sedation Pill

Shepard Fairey, Compton’s Most Wanted

Ernesto Yerena, Lion of the Dead

Ernesto Yerena, Yaqui Dia de los Muertos

Why the Big Day Out is dead

This year is the last time we’ll see the Big Day Out here in NZ. Theories abound as to why the event no longer attracts its once cult-like following, but if you really want to understand you need look no further than this picture @barryhannah snapped at the gate out there today.

[Image: @barryhannah’s photo of the sign at the gate]

Look familiar?

Clearly, the talent pool at BDO HQ is now so shallow they think it’s OK to steal images from blogs rather than source them by legitimate means.

It’s amateur hour out there, folks. If they’ll break the law over something as trivial as an ID sign to save a few bucks, why would they stop short of ripping off artists and skimping on safety? My guess is they wouldn’t.

If you ask me, the kids staying away in droves this year have made a good call. Shame, as the Big Day Out used to be a hell of an event.

Social media usage following the Christchurch earthquake

A few random thoughts on some of the applications of social media following the Christchurch earthquake. I’ll attempt a more meaningful formulation soon.

In the first minutes, real-time tools like Twitter and TwitPic really came into their own. First the news broke that a major quake had occurred. Reports followed of its magnitude, and then the first pictures and videos started to come in. Within 15 minutes, #eqnz had become the accepted hashtag. Despite the massive fragmentation of sources, thousands of people all over the world were watching the story evolve in real-time, reading the same tweets and viewing the same images.

In those first minutes, the thing most people were concerned about was finding out what the hell was going on. As such, open and real-time were absolute requirements. Several people and organisations recommended Facebook pages as useful resources, but in this instance they were way off the mark. I don’t want to become a fan of (sorry, ‘like’) a page in order to report a missing pet or loved one, and I want my plea for help to have the widest possible audience – not just people who have also signed up to that page.

It was interesting to once again observe a kind of passive-aggressive turf war between proponents of rival hashtags. #eqnz emerged almost immediately, followed by the Earthquake Commission’s proposed #chch. Why the hell they felt the need to upset the apple cart is beyond me.

I’ve read a few posts lately with people bitching about the continued use of the now-defunct ‘RT @…’ method of retweeting. Personally I couldn’t see what all the fuss was about, and really didn’t give a toss one way or the other until yesterday. Looking through the #eqnz stream in search of new information was a nightmare, due to the thousands of RT-style retweets that would otherwise have appeared as a simple numerical increment appended to the original tweet. RT-style retweets enter the stream in the same fashion as the original, but with a later time stamp. When people are looking for real-time information, wading through thousands of hour-old (oh, how our expectations have changed) posts is a painful and unnecessary drag. I’ll never use the RT style again. Who’s with me?

It seems every man and his dog tried to get their favourite celebrities to retweet links to the NZ aid organisations, and many did so (kudos to Simon Pegg, Nick Frost and Stephen Fry; screw you, Oprah). I can’t help but wonder whether these pleas were a genuine attempt to leverage star power in a time of need, or a sad form of 21st-century autograph hunting. Nobody collects signatures anymore – it’s all about @ replies and retweets, don’t you know. Does it matter? No, but it’s interesting to me at least.

Within minutes of the first pictures of the collapsed Christchurch cathedral coming out, its Wikipedia page and image had been updated to reflect its current condition. Not only that, Wikipedia administrators had already flagged the new image as a candidate for removal due to its dubious copyright status. Wikipedia is such an efficient animal, especially in times like this (I was reminded of similar speed around Steve Irwin’s death, and Pluto’s demotion to non-planet status). It never ceases to amaze and impress me.

Once the mainstream news organisations started to get a handle on this situation – for the first couple of hours they were essentially re-publishing information and images sourced from Twitter – social media usage seemed to shift into recovery mode. People wanting to locate missing loved ones were tweeting their names and possible locations, and people on the ground were attempting to find them. Several wiki and wiki-like projects kicked in, creating centralized registers of the missing and the found. Wikis were an ideal technology to use now, to balance out the noise and evanescence of the Twitter stream.

The utility of some of these digital tools also provided a way for concerned people all over the world to get involved. I looked across my office at one point and saw a colleague scanning the Twitter stream for reports of missing people, updating a missing persons’ wiki. Around a dozen people were working simultaneously. Where and who were the others? I have no idea. It doesn’t matter.

Misinformation is always a challenge with social media, and yesterday was no different. When reports came in of damage to the Christchurch cathedral, the accompanying pictures were actually of a different church that had been totally destroyed. Some well-meaning soul assumed blood would be in demand, and put the word out that donations were urgently required. The resulting flood of offers – not needed, thanks to regular donors such as myself – placed undue pressure on the Blood Service, who were forced to divert attention from their task at hand and respond with their own assurances that blood stocks were fine and dandy. It’s not all bad news though – because the misinformation was largely on Twitter, the Blood Service was prompted to create their own account (@nzblood) in order to join the conversation. Here’s hoping they stick around.

Misinformation is by no means the sole domain of social media. Last night I was appalled to see TV3 news anchor Hilary Barry announce ‘unconfirmed reports’ of a death toll as high as 300 to 400 people. Unconfirmed reports? Why not preface it with ‘a bloke in the pub told me’? If this is what counts as journalistic integrity in the 21st century, it’s no wonder the old media establishments are struggling. If I’m going to soak up a bunch of speculation and hearsay I might as well get it for free and without a 15-second ad at the start.

TTFN…

Simplex methods and the problem of endless consumer choice

One of the more interesting projects I’ve worked on lately involves a large retailer looking to improve their online sales channel.

As it stands, their website covers a large number of brands, and within each brand there are many categories, sub-categories, products, and product variants.

For client confidentiality reasons I won’t name the company or industry, but it’s a common enough situation. For example, in the clothing industry there are brands, styles, men’s and women’s lines, and cut and colour variations.

Traditional approaches to helping customers explore deep and varied product offerings typically centre on two modes: hierarchical browsing, and search. In either case the customer enters keywords and/or selects one or more criteria from the brand -> category -> sub-category -> product schema to generate a list of products that satisfy those criteria, and then evaluates each option in turn.

Many would argue that there’s a reason these two modes of exploration are the norm, and that there’s no need to fix what ain’t broke. Screw ’em.

While pondering the ifs and hows of improving on browse and search, it occurred to me that both are open to valid criticism when it comes to efficiency. Whether you’re browsing a catalogue or reviewing search results, looking at an individual product is useful for eliminating unsuitable options, but it doesn’t bring you any closer to finding the product that best suits your needs. You may chance upon a product that ticks all your boxes and decide to stop searching, but you have no basis for knowing whether your choice was optimal – had you looked at a couple more options you might have found a better product for half the money, and you’ll never know.

According to the classic Secretary Problem, the optimal strategy is to pass over the first 37% of candidates and then commit to the next one that beats everything you’ve seen so far – but that isn’t particularly helpful when there are thousands of products to choose from.
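For the curious, here’s a rough Python sketch – my own toy simulation, nothing more – of how that 37% rule plays out when the candidates arrive in random order:

```python
# A quick Monte Carlo sketch of the Secretary Problem's 37% rule (illustrative
# only): skip the first n/e candidates, then take the first one that beats
# everything seen so far, and see how often that lands you the single best option.
import math
import random

def stop_at_37(values):
    cutoff = int(len(values) / math.e)            # roughly the first 37%
    benchmark = max(values[:cutoff], default=float("-inf"))
    for v in values[cutoff:]:
        if v > benchmark:
            return v                              # first candidate better than the benchmark
    return values[-1]                             # fell through: stuck with the last one

def success_rate(n=100, trials=10_000):
    wins = 0
    for _ in range(trials):
        values = random.sample(range(n * 10), n)  # distinct "qualities" in random order
        if stop_at_37(values) == max(values):
            wins += 1
    return wins / trials

print(success_rate())   # hovers around 0.37, as the theory predicts
```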

Either way, traditional search and browse methods are woefully inefficient. What we need is a mode of exploration whereby each time an option is evaluated, it significantly increases the odds that the next option we evaluate will be the optimal one.

It seems to me that researching a purchase is essentially a complex optimization problem, whereby consumers are looking to find a product that maximizes some attributes and minimizes others, within a set of constraints that includes factors such as cost.

Now, when there are very few options to explore, you can easily get away with assessing the value of each one, ordering highest to lowest and picking the one at the top of the list. It takes a little work, but we know this is the optimal choice because it’s the one that yields the greatest value within the specified constraints. There may be other choices, but all will provide less value and/or be outside your constraints (budget etc).
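To make that concrete, the small-catalogue version is trivial once you’ve scored everything. A throwaway Python sketch – the products, budget and scoring function are all made up for illustration:

```python
# Exhaustive approach for a small catalogue (hypothetical data and scoring):
# score every option that fits the constraints, sort, and take the top one.
products = [
    {"name": "Jacket A", "price": 120, "warmth": 7, "weight": 900},
    {"name": "Jacket B", "price": 250, "warmth": 9, "weight": 700},
    {"name": "Jacket C", "price": 180, "warmth": 8, "weight": 650},
]
budget = 200

def score(p):
    # Maximize warmth, minimize weight - the weightings here are completely made up.
    return p["warmth"] * 10 - p["weight"] / 100

affordable = [p for p in products if p["price"] <= budget]   # apply the constraints
best = max(affordable, key=score)                            # rank and pick the top
print(best["name"])                                          # the optimal choice within budget
```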

When the number of options is very large, however, this approach just doesn’t work due to the near-infinite number of product attribute combinations that need to be evaluated and ranked. The Travelling Salesman Problem is a great example of how a seemingly simple optimization problem can turn into a computational nightmare once you move beyond a few variables.

The Simplex Algorithm (devised by the great George Dantzig, a.k.a. Will Hunting) is quite possibly the most outlandish mindfcuk ever conceived by man. I managed an A in the Linear Programming paper I took in my final year as an undergraduate, and by necessity knew it well. I could (and did) do this stuff all day, right up until the day after the final exam. Then I needed to free up some brain power to resume vital functions such as breathing, preparing for post-graduate study, and playing Three Man, and promptly forgot the lot. But I digress…

Loosely speaking, the simplex algorithm is a very efficient mathematical technique for solving complex optimization problems. Rather than evaluating and ranking all possible alternatives against a stated objective function, the simplex algorithm runs along the lines of:

Start out with a basic feasible solution (thinking visually, this is a point on the boundary of the n-dimensional shape formed by plotting constraints over n variables – see the image below). From that point, look at the slope of your objective function. If it’s decreasing (for reasons I won’t go into, optimality is achieved when the objective function is minimized), go to the adjacent point that shows the greatest decrease in the objective function. Keep doing this until you reach a point where the objective function no longer decreases. That’s the optimal solution.

A system of linear inequalities defines a polytope as a feasible region. The simplex algorithm begins at a starting vertex and moves along the edges of the polytope until it reaches the vertex of the optimum solution. (Source: Wikipedia)
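If you want to see this in action without dusting off the tableau mechanics, SciPy will solve a small linear program for you (recent versions use the HiGHS solvers under the hood rather than Dantzig’s original method, but the idea is the same). A tiny made-up example, nothing to do with the client project:

```python
# A minimal linear program solved with SciPy (entirely made-up numbers):
# maximize x + 2y subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
# linprog minimizes, so maximizing x + 2y is written as minimizing -x - 2y.
from scipy.optimize import linprog

c = [-1, -2]                 # objective coefficients (minimize c @ [x, y])
A_ub = [[1, 1], [1, 3]]      # left-hand sides of the <= constraints
b_ub = [4, 6]                # right-hand sides
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])

print(res.x, -res.fun)       # optimal vertex (3, 1) and the maximized value, 5
```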

I believe a similar approach could work wonders if applied to the problem of consumer choice. What if, instead of expecting a website visitor to browse an entire product offering or wade through hundreds (if not thousands) of search results, we got them quickly to a single ‘best bet’ product to evaluate (a ‘basic feasible solution’ that meets their criteria but may or may not be the optimal one), and used that as an anchor of sorts to guide them towards the optimal product to buy?

In practice, it could work like this:

1. Get the customer to provide an indication of what they’re looking for.
2. Instead of an ordered list, return a ‘full details’ view of a single product, accompanied by a limited set of related products that also match the customer’s criteria (the adjacent points in the n-dimensional space bounded by the customer’s constraints).
3. Ask the customer to indicate which of the related products is closer to what they’re looking for than the product they’re viewing now.
4. Repeat until the product being displayed is better than the alternatives on display, at which point we know it’s the one the customer ought to buy.
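Here’s a rough Python sketch of that loop. Everything in it – the catalogue, the distance measure, and the ‘customer picks their favourite’ step – is hypothetical; in real life the last of those is a person clicking, not a preference function:

```python
# A toy sketch of the 'best bet plus neighbours' idea (hypothetical data and
# preference model). The customer's clicks are simulated by a preference
# function; in reality each step is a person choosing one of the on-screen options.
import random

random.seed(1)
catalogue = [{"id": i,
              "price": random.randint(50, 500),
              "quality": random.randint(1, 10)} for i in range(200)]

def preference(p):
    # Stand-in for the customer's hidden utility: likes quality, dislikes price.
    return p["quality"] * 40 - p["price"]

def neighbours(current, pool, k=4):
    # The k products most similar to the one on screen (crude distance measure).
    def dist(p):
        return abs(p["price"] - current["price"]) + 30 * abs(p["quality"] - current["quality"])
    return sorted((p for p in pool if p is not current), key=dist)[:k]

def guided_search(pool, budget=300):
    feasible = [p for p in pool if p["price"] <= budget]   # products meeting the constraints
    current = random.choice(feasible)                      # an arbitrary 'best bet' to start
    while True:
        options = neighbours(current, feasible)
        best_alternative = max(options, key=preference)    # the simulated customer click
        if preference(best_alternative) <= preference(current):
            return current                                 # nothing on screen beats it: stop
        current = best_alternative                         # hop to the better product

print(guided_search(catalogue))
```

Like the simplex algorithm itself, this only guarantees an optimum relative to the neighbours you choose to show, but every click genuinely narrows the field instead of just crossing one more item off a list.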

Now, I know there’s a ton of mind-numbingly complex calculation behind this stuff, and I’m not suggesting that a literal application of the simplex algorithm is what’s required here. But as an alternative approach to helping prospective customers find the product that best suits their needs – and giving them confidence in the optimality of their selection, to get them over that pre-purchase hump – I think there’s a lot to be gained from this idea. Particularly as an alternative to the traditional, lazy, and costly (in that many people just give up and leave) approach of presenting a product catalogue or a list of search results and leaving customers to their own devices.

I don’t know, maybe I’m just going nuts. What do you guys think?

*Afterthought*

What if Google used this approach? Instead of delivering a list of sites, a search would take you directly to the site Google reckons is the best answer to your query, with the next-best sites shown in an overlay frame, allowing you to continue your search by hopping from one site to the next (the set of recommendations improving with each click) without having to return to the results page. Possibly not great from a revenue perspective, but as a user it’d be pretty cool, no?

You guys are sick!

The list below contains the top 50 search phrases that have delivered visitors to this blog since August 2006. If nothing else I guess it’s proof positive of rule 34, and its corollary – ‘if porn exists, no matter how obscure or depraved the subject matter, there is some sick bugger out there looking for it’.

I’m not sure what’s worse – that you guys are looking for this stuff, or that Google thinks I have it…


Courage Under Fire

It’s a shame Mel Gibson turned out to be such a douchebag. A few obvious train wrecks aside, he’s had a hell of a career and played some interesting characters. One of my favourites was his portrayal of Lt. Gen. Hal Moore, in We Were Soldiers.

Hal Moore is something of a legend in the US military, both as a hardened combat veteran and an inspirational leader of men. I’ve read a number of pieces by and about him over the years, and recently came across an audio clip of his – Four Principles for a Leader’s Conduct in Battle.

[soundcloud params="auto_play=false&show_comments=false" height="80" url="http://soundcloud.com/stu-3/halmoore"]

When I first heard this clip I was reminded of some of the many excuses I’ve heard over the years from clients who understood and appreciated the importance of social media but weren’t ready to take the plunge – it’s too risky, too complex, too expensive, not a priority right now, we don’t have the time, the comms team won’t allow it, my dog ate it… Sound familiar?

It’s hardly an original idea to suggest that the wisdom of military leaders has relevance in the business world – business sections of bookstores are packed with analyses of Musashi, Sun Tzu and Machiavelli, and that’s just for starters. But for what it’s worth, here are a few thoughts I’d offer the keen but reluctant, would-be social media marketer, based on Moore’s observations.

1. Three strikes and you’re not out: No matter how well you plan and prepare, there will always be unpleasant surprises. A great idea will miss the mark and flop. Someone on your team will say or do something stupid. You will encounter haters, trolls, and the genuinely unimpressed. This is all normal. It’s something we all face. Don’t let it get to you.

2. There is always one more thing you can do to influence any situation in your favour: You don’t have to jump in with both feet and do everything at once. In fact, doing too much at once is pretty dumb. It’s better to do something than nothing, so start out by doing one thing and doing it well. And when you get the hang of that try something else, and so on and so on. I’ve long been a fan of agile approaches to strategy and planning, and I still can’t fault the logic: The easiest way to eat an elephant is one bite at a time, starting with the tasty bits and leaving the asshole till last.

3. When nothing’s wrong, the only thing that’s wrong is that nothing’s wrong: Per #1, above, things invariably go wrong at times. Problems that sneak up on you are harder to deal with, so you’d better be on the lookout. Listening and analytics tools like Radian6 and Omniture may seem expensive, but will save you a whole lot of hurt (provided you actually do something with the insight you gain from them). Trust me on this.

4. Trust your instincts: Much of this stuff we call social media is ephemeral. When the opportunity to say or do something arises, it won’t be there for long, so it’s important to develop the ability to act quickly and appropriately – a quality I sometimes refer to as ‘digital wit’. This requires training and resourcing (human, financial and infrastructural), and more than a little trust. By trust we’re not just talking about providing a little latitude from corporate communication protocols. We also need to willingly accept that some of the things we try will fail, and that’s ok. Rather than retrenching when things don’t go according to plan, we face up to the facts, deal with them, and move on.

And I guess if that doesn’t work you can always show ’em your war face.

Tracking Hosted WordPress Blogs With Google Analytics

The only thing that really bugs me about using a WordPress-hosted blog is the auto-removal of Javascript from posts and template elements (widgets). They say this is for security reasons, and since I haven't paid dollar-one for four years' use of what is actually a very good blogging platform, I'm not going to make too much of a fuss about it.

Much of the day-to-day stuff that would require Javascript can be done using custom tags and widgets – embedding video, Flickr galleries, RSS feeds and the like. Analytics, on the other hand, is kinda frustrating. The WordPress forums are full of people wanting to know how to deploy Google Analytics to hosted WordPress blogs, and the answer is always the same – you can't. GA requires the embedding of a piece of Javascript on every page. WordPress hosted blogs strip out Javascript. Thus, Google Analytics is a no go.

Or is it?

I found this site a few weeks ago, while looking into some Facebook reporting issues – a rather simple app that allows you to embed a hosted image in a Facebook page, and which runs the Google Analytics tracking script each time that image is requested. On a whim, last night I thought I'd see if it worked with my WordPress blog, and whaddya know – it did!
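I haven’t seen the code behind it, but the general trick is simple enough to sketch: serve a tiny image, and fire a pageview at Google on the server side every time that image is requested. Here’s a back-of-the-envelope version in Python using Flask and GA’s Measurement Protocol – purely illustrative, and not necessarily how the webdigi app actually does it:

```python
# A rough sketch of an image-based tracker (not the webdigi app's actual code).
# Serving the image triggers a server-side pageview hit; the property ID and
# page path below are placeholders.
import base64
import uuid

import requests
from flask import Flask, Response, request

app = Flask(__name__)
GA_PROPERTY_ID = "UA-XXXXXXX-X"   # placeholder - your own profile ID goes here
PIXEL = base64.b64decode(         # a 1x1 transparent GIF
    "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7")

@app.route("/tracker.gif")
def tracker():
    # Record one pageview against the blog, then hand back the pixel.
    requests.post("https://www.google-analytics.com/collect", data={
        "v": "1",                              # Measurement Protocol version
        "tid": GA_PROPERTY_ID,                 # tracking ID
        "cid": str(uuid.uuid4()),              # anonymous client ID
        "t": "pageview",                       # hit type
        "dp": request.args.get("page", "/"),   # page path (one image = one 'page')
    }, timeout=2)
    return Response(PIXEL, mimetype="image/gif")
```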

It's far from perfect – for example, each unique image can only report on one page at a time (i.e. the whole blog is counted as one page), as Javascript would be required to dynamically pull in the filename and post title – but it's a hell of a lot richer than the standard WordPress blog stats. If you want to know what your most popular posts are, you can get that info from WordPress, but for everything else – unique visitors, top referrers, visitor demographics etc – Google Analytics seems to do just fine.

It's early days, and time will tell if the data I'm seeing is accurate or useful (hopefully both). Looking good so far though. Give it a try and let me know how you get on, hey?

Tracking image generator: http://ga.webdigi.co.uk/

*Update*

I’ve had a few people ask about implementation, so here goes:

  1. Create a new Google Analytics profile for your hosted WordPress blog
  2. Grab the Google Analytics ID for your new profile (it looks like UA-3123123-2)
  3. Complete the form here – http://ga.webdigi.co.uk/ – to generate your tracking image code. Copy this to your clipboard.
  4. Log in to your WordPress console, and then go to Appearance -> Widgets
  5. Create a new ‘Text’ widget, and drag it to your sidebar.
  6. Paste the tracking image code into the body of the text widget. Leave the title blank.
  7. Save and close the text widget.
  8. Done!