The New PUMA Fuseproject – Packaging as Branding

Really nice work by Yves Behar for Puma to reduce packaging waste.

“Rethinking the shoebox is an incredibly complex problem, and the cost of cardboard and the printing waste are huge, given that 80M are shipped from China each year,” Béhar says. “Cargo holds in the ships can reach temperatures of 110 degrees for weeks on end, so packaging becomes an enormous problem. This solution protects the shoes, and helps stores to stock them, while saving huge costs in materials.”

The impact: Puma estimates that the bag will slash water, energy, and fuel consumption during manufacturing alone by 60% – in one year, that comes to a savings of 8,500 tons of paper, 20 million megajoules of electricity, 264,000 gallons of fuel, and 264 gallons of water. Ditching the plastic bags will save 275 tonnes of plastic, and the lighter shipping weight will save another 132,000 gallons of diesel.

There’s no doubting the green credentials, and I was going to wax more lyrical. But then I read the comments on YouTube. Check out these two gems:

DJHELLO 21 months? To replace a box with a bag? I think Yves saw you coming

FCule 21 FUCKING MONTHS?!?!? Are you f kidding me.. jesus christ



Posted via email from hellokinsella’s posterous

Our Path To Truly Rich, Personalised Video Experiences

Dom wrote this feature for the 1st anniversary of the rebranded Revolution magazine. This is a copy+paste of the expanded version posted on his blog.

It gives you a glimpse into some of the projects I’ve worked on at glue, and the technologies we’re looking into at the moment.

Little did I know it at the time, but a project for Mars’ sponsorship of Euro 2006 was the catalyst for a new approach to personalised video content here at glue.

What we did was crude and simple: we allowed people to create a fan by choosing a head, body and hands. These individual assets existed as PNGs on the server and, depending on what was chosen, a JPEG was created using ImageMagick. Without thinking too much more about it, we moved on to the next project.
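
Purely as an illustration of that server-side step – glue’s actual code isn’t public, and the asset names and canvas size below are invented – here’s how layering the chosen PNGs into a flat JPEG might look, using Pillow in place of ImageMagick:

```python
from PIL import Image

def build_fan(head_path, body_path, hands_path, out_path):
    """Layer transparent PNGs bottom-up, then flatten to a JPEG --
    roughly what the ImageMagick step did on the server."""
    base = Image.new("RGBA", (200, 400), "white")   # invented canvas size
    for layer_path in (body_path, head_path, hands_path):
        layer = Image.open(layer_path).convert("RGBA")
        base.alpha_composite(layer)                  # respects PNG transparency
    base.convert("RGB").save(out_path, "JPEG")       # JPEG has no alpha channel
```

In production this would be the job of a `convert … -flatten` call; Pillow just keeps the sketch self-contained.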

A year later our Get The Message recruitment campaign for the Royal Navy was born:


We quickly realised that the audience the Navy wanted to recruit weren’t exclusively sat behind PCs all day. In fact the bulk of them weren’t. For this audience the only real channel available at scale was mobile.

The problem was we’d become experts in interactive video using Flash, but Flash wasn’t (and broadly still isn’t) compatible with many handsets. The file format of choice was, and is, MPEG video, so we needed to replicate the browser experience using it.

We scratched our heads and fairly quickly came round to the idea that if we could create individual JPEGs on the fly, stitching them together would create video. So that’s exactly what we did – this time combining ImageMagick with FFmpeg.
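
The exact commands aren’t in the post, so treat this as a sketch of the idea rather than the production setup: assuming each personalised frame is written out as a numbered JPEG, FFmpeg’s image-sequence input stitches them into a single video file. To stay self-contained it only builds the command line (running it needs an `ffmpeg` binary on the path):

```python
def stitch_command(frame_pattern, out_path, fps=25):
    """Build the ffmpeg command that turns numbered JPEG frames
    (frame_0001.jpg, frame_0002.jpg, ...) into one video file.
    Execution is left to the caller: subprocess.run(cmd, check=True)."""
    return [
        "ffmpeg",
        "-framerate", str(fps),   # input frame rate of the image sequence
        "-i", frame_pattern,      # e.g. "frames/frame_%04d.jpg"
        "-pix_fmt", "yuv420p",    # widest player/handset compatibility
        out_path,                 # codec/container inferred from extension
    ]

cmd = stitch_command("frames/frame_%04d.jpg", "message.mp4")
print(" ".join(cmd))
```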

The video message is delivered as an SMS. The recipient downloads and watches the video, and can also respond directly on the handset:

Get the message mobile video from Dom O’Brien on Vimeo.

At the time this was a first and we all felt pretty happy and gave ourselves a slap on the back like only the ad industry can. But almost naively, and for a second time, we’d stumbled on the door to a much bigger opportunity:

Replicating the Flash experience had fulfilled the requirements of this project, but we soon recognised that by automating motion graphics or 3D packages it’s immediately possible to generate video without creative limits.

Enter DYNAMIC VIDEO (a phrase we’ve bandied about the agency for a few years now that REALLY needs a better name…)

Whilst traditional video is shot with a camera and broadcast, dynamic video allows for content to be generated specific to the person watching it, at the moment of viewing.

To help understand this complex concept, think about the gaming world where a game is produced but each game-play is unique to the actions of the game player. With dynamic video the same is now true for brand experiences.

Here’s one such example we created in 2008 for Bacardi using their existing endorsement of UK beatboxing champion Beardyman.

It was initiated by the simple thought: ‘wouldn’t it be great if everyone could beatbox as well as Beardyman?’ And from there the project was born.

It’s a simple upload-your-face mechanic, using Kofi Annan here for the purposes of the demo:

Bacardi Beatology from Dom O’Brien on Vimeo.


There’s all sorts of complex things going on under the bonnet.

There’s proprietary image recognition software interpreting the uploaded photo, identifying facial elements and stripping the face out from its background (no need for manual intervention).

Then, using 3ds Max, the video is generated by mapping the face texture onto existing wireframe animations.
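
The heavy lifting there happens inside the 3D package, which can’t be reproduced here, but the core operation – painting each destination pixel by sampling a texture at the matching coordinates – can be shown in miniature. This stdlib-only toy (all names invented, nearest-neighbour, no filtering) maps one texture triangle onto one screen triangle via barycentric coordinates; a full UV map is just many of these:

```python
def barycentric(p, a, b, c):
    """Barycentric weights of point p w.r.t. triangle corners a, b, c."""
    det = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    u = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / det
    v = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / det
    return u, v, 1 - u - v

def map_triangle(texture, tex_tri, dst_tri, dst_size):
    """Fill dst_tri on a dst_size canvas by sampling `texture` at the
    matching point of tex_tri -- the kernel of UV texture mapping."""
    w, h = dst_size
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            u, v, s = barycentric((x, y), *dst_tri)
            if min(u, v, s) < 0:       # pixel lies outside the triangle
                continue
            # the same barycentric mix locates the sample in texture space
            tx = u * tex_tri[0][0] + v * tex_tri[1][0] + s * tex_tri[2][0]
            ty = u * tex_tri[0][1] + v * tex_tri[1][1] + s * tex_tri[2][1]
            out[y][x] = texture[int(ty)][int(tx)]
    return out
```

Mapping a detected face onto an animated head is this operation repeated per triangle, per frame – which is also why it pays to let a 3D package (or a GPU) do it.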

This technique has two immediate benefits:

1. Visually, pretty much anything is possible (at least anything that’s possible within motion graphics or 3D applications)

2. The generated file is the ubiquitous MPEG – enabling distribution across channels without the need to re-engineer

However the technique is fairly processor intensive, taking around 20 seconds per person to generate. This gives a throughput of 4,320 videos per processor per day. Whilst this is OK on a smallish campaign, for larger ones the only real option is to throw more hardware at it, which can be costly and only becomes viable once a client really values what is being achieved creatively.
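
The arithmetic behind that figure, and the linear scaling of ‘throw more hardware at it’, fits in a few lines:

```python
SECONDS_PER_DAY = 24 * 60 * 60        # 86,400

def daily_capacity(seconds_per_video, processors=1):
    """Videos renderable per day at a fixed render cost per video."""
    return processors * SECONDS_PER_DAY // seconds_per_video

print(daily_capacity(20))       # 4320 -- the per-processor figure above
print(daily_capacity(20, 10))   # 43200 -- ten processors, ten times the cost
```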

The emergence of cloud computing farms, and the rendering capacity they offer, to an extent solves this issue, but it’s early days. These cloud farms not only offer scalable rendering capabilities; with the proliferation of smaller devices in all our pockets, they also enable richer experiences to be created remotely and viewed on device.

Another sector dabbling with cloud farms in this way is the handful of virtual rendering games companies that have recently emerged, which negate the need for a console by rendering content remotely and bringing it into the home via your broadband. (Can our broadband really cope with realtime 1080p video content? Or is this partly the reason these services haven’t yet taken off?) Definitely one to keep an eye on.

As is the recent emergence of open source video specific rendering farms like PandaStream.

Or potentially the answer is not saving the generated video to file at all, but dynamically constructing the video within the stream, as done here:


It’s a neat solution, but the SDK means the production process is alien to existing skill sets, at least in the short term.

So generally speaking it would be fair to say there’s lots of trial and error needed. And I can’t help but notice that the aforementioned gaming industry is on a collision course with the digital industry – both attacking a similar problem from different angles. This is a most exciting prospect. (Here’s the closest example of the two together I’ve seen to date.)

In the meantime it would be great to think that the Adobes of this world – or, maybe more likely, the hardware guys like nVidia or AMD – will move into this space and create a tool to ease the production process. Until they do, these experiences will be built by ingeniously combining niche technologies to fit the needs of the project.

It therefore becomes apparent that, to stay ahead of competitors, R&D can’t be undervalued. The same goes for having the time and freedom to explore, trial and learn new technologies and techniques on paid-for work. As we’ve testified here, bits of work that at the time may not seem like much may in the future prove invaluable by re-emerging as a wholly different entity.

So collectively we (the industry) have come to a juncture where new creative opportunities exist. With this comes the need for internal re-education, both in how we approach briefs conceptually and in how we capture assets in a way that enables them to be manipulated with these techniques.

And with an eye on the future: glue recently ventured into the world of TV. I for one am really excited at the prospect of the day the archaic TV broadcasting infrastructure is modernised and we can apply our digital know-how to the currently stagnant format. It defies belief that so much is still run from Betacam tape. Admittedly I don’t know the setup intimately, but I’d have thought all it needs is for systems to be driven by an internet-enabled computer – which happens on occasion, but not enough.

Here’s another, more dynamic example that the clever boys and girls at MiniVegas were able to negotiate as a special short-term deal for S4C a few years ago:

S4C ident by Minivegas from Dom O’Brien on Vimeo.

We’re undoubtedly in exciting times, and hats off to the team here driving all of this forward: @SuperScam @BananaFritter @hellokinsella

Bring on the next project..

Proof of Concept: The Brain To Brain Internet

YouTube link.

This is a BCI experiment whereby one person uses a brain–computer interface to transmit a series of digits over the internet to another person, whose computer receives the digits and presents them to the second user by flashing an LED array. The encoded information is then extracted from the brain activity of the second user.
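
The video doesn’t spell out the coding scheme, so treat this as a guess at its shape rather than the researchers’ method: a toy, SSVEP-style frequency-tagging sketch in which each digit is assigned its own LED flash rate and the strongest frequency in the receiver’s (here simulated) brain response names the digit. Every name and number below is invented:

```python
import random

# Hypothetical frequency tagging: each digit gets its own LED flash
# rate (Hz); the viewer's visual cortex entrains to the one shown.
DIGIT_TO_HZ = {d: 6.0 + d * 0.5 for d in range(10)}   # 6.0, 6.5, ... 10.5 Hz

def transmit(digit):
    """Sender side: the chosen digit selects which flash rate to display."""
    return DIGIT_TO_HZ[digit]

def observe_spectrum(shown_hz, noise=0.2):
    """Stand-in for EEG analysis: power per candidate frequency, with the
    shown frequency dominating (a simulation, not real signal processing)."""
    return {hz: (1.0 if hz == shown_hz else 0.0) + random.uniform(0, noise)
            for hz in DIGIT_TO_HZ.values()}

def decode(spectrum):
    """Receiver side: the strongest frequency bin names the digit."""
    peak = max(spectrum, key=spectrum.get)
    return next(d for d, hz in DIGIT_TO_HZ.items() if hz == peak)

message = [3, 1, 4, 1, 5]
received = [decode(observe_spectrum(transmit(d))) for d in message]
print(received)   # [3, 1, 4, 1, 5] -- the noise is too small to flip a bin
```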

This shows true brain-to-brain communication. It’s done as a proof of concept – to show that brain-to-brain transfer *is* possible – which, as the video demonstrates, it is.

It doesn’t look particularly glamorous, but what they’ve done is pretty amazing.


Augmented Reality Texture Extraction Experiment

Augmented Reality Texture Extraction Experiment from Lee Felarca on Vimeo.

This is an AR-based experiment that enables the user to lift textures from real-world objects in live video and apply them to 3D objects that are overlaid on top of them.

Only box primitives are supported here, but the general idea could be extended to other types of 3D primitives or potentially even more complex objects with some clever image compositing and UV mapping.
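
For a feel of the ‘lifting’ step: given the four corner points of a tracked box face, the pixels inside can be unwarped into a flat texture. This stdlib-only toy (all names invented) uses a bilinear blend of the corners; a real AR pipeline would use the marker’s homography for a true perspective unwarp, so this is only a rough stand-in:

```python
def lift_texture(frame, quad, out_w, out_h):
    """Unwarp the quadrilateral `quad` (corners TL, TR, BR, BL in frame
    pixel coords) into an out_w x out_h texture by sampling the frame at
    the bilinear blend of the corners -- nearest-neighbour, no filtering."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = quad
    tex = []
    for j in range(out_h):
        v = j / (out_h - 1) if out_h > 1 else 0
        row = []
        for i in range(out_w):
            u = i / (out_w - 1) if out_w > 1 else 0
            # blend along the top and bottom edges, then vertically
            top_x, top_y = x0 + u * (x1 - x0), y0 + u * (y1 - y0)
            bot_x, bot_y = x3 + u * (x2 - x3), y3 + u * (y2 - y3)
            sx, sy = top_x + v * (bot_x - top_x), top_y + v * (bot_y - top_y)
            row.append(frame[round(sy)][round(sx)])
        tex.append(row)
    return tex
```

The lifted texture can then be fed straight back in as the UV map for the overlaid 3D box.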

See the blog post for more info and a live version of the demo.

Some Cool Visual Interface Stuff

Computer vision technologies for manipulating digital interfaces with gestures are already here and mature; now it’s simply a matter of designing creative applications around them. And as with many design tasks, presentation is everything.

ActionScript programmer Peter Kaptein has done some brilliant creative work to mimic the famous gestural interface in the film Minority Report using only Flash, FLARToolkit, a webcam, a printer… and his fingers.

YouTube link.

Jeremy Bailey created this project to promote the programmes and services of Squeaky Wheel Media Arts Center. He wrote his own physical interface presentation software from the ground up to accomplish the task. This video is the result. Mad.

YouTube link.

Chris O’Shea found this demo of Your Shape, an upcoming game for the Wii. The camera vision itself is fairly simple, but the software is being clever about mapping foreground motion onto a pre-determined 3D model of the body.

And here’s the Your Shape trailer.

YouTube link.

Chris also wrote this fascinating post about the mysterious controller for Project Natal. Worth a read.

via Create Digital Motion.

Radiohead Wall of Ice


This is a Contagious Magazine article about Radiohead and their continuing experimentation with technology and music.

Dear me, this is clever. Radiohead have now entered the next phase of a long-running metamorphosis, from mere musicians to expert manipulators of the technologies and narratives that underpin the internet.

Two years ago, they released a full length album (‘In Rainbows’) online and asked fans to pay what they thought it was worth. This weekend, they kept the indie masses hanging in anticipation of a new EP, one week after frontman Thom Yorke’s declaration that ‘doing another album would kill us’ was reported everywhere from Twitter to NBC (further proof that the band have transcended the norms of musical notoriety).

By seeding cryptic content and allowing the fans to do the digging, Radiohead were effectively running their own Alternate Reality Game: a treasure hunt in which collaborators work together to solve a series of clues and get to the end of the game.

This is bloody complicated, so we’re giving it to you in list format. Hold onto your hats!

1) What appeared to be a new Radiohead track was posted to a file-sharing site. The track, ‘These Are My Twisted Words’, quickly spread, racking up something like 200,000 views on YouTube alone in three days. (Note: this does not take into account actual downloads of the track. Listen here.)

2) Alongside the audio file was what is known as an .nfo file: a small text document which contains information about the crew that released it as well as the band itself. This particular .nfo file not only listed Radiohead as the artist, it also contained the following cryptic few lines: ‘i just wanted to reassure readers that following representations / seeking confirmation / that before your very eyes / behind the wall of ice / that the box is not under threat / however they are set to remove / other boxes / in fact i have the list in front of me / i went to a briefing on their plans / and challenged them to tell me / exactly what the cost would be’ (Check the ASCII out here.)

3) The ‘Wall of Ice’ referred to in the description and further down the note was taken to be a reference to a cartoon from popular webcomic xkcd, in which a stick figure announces: ‘Dear Sony, Microsoft, the MPAA, the RIAA, and Apple: Let’s make a deal. You stop trying to tell me where, when and how I play my movies and music, and I won’t crush your homes under my inexorably advancing wall of ice.’ (See it here.)

Knowing how fond Radiohead are of digital rights management (not at all), this was taken by excited fans as evidence of a new EP entitled ‘Wall of Ice’ to be released the following day, on the 17th August. The more eagle-eyed had spotted this date in the .nfo file. (NB. This is also the day that Radiohead’s old and vilified label EMI had decided to rerelease the band’s back catalogue in the name of callous profiteering. Coincidence? Probably not.)

4) Fans note the URL points to Radiohead’s own W.A.S.T.E. site. Several music journalists hyperventilate.

5) A slight abatement of momentum when it is realized that the domain had been hastily registered by some random in the Netherlands, and may in fact have nothing to do with the band. Excellent opportunism from the Dutch.

6) A swift return to hysteria when an odd image appearing to represent two twisted trees appears on W.A.S.T.E. The image is also clearly recognizable in the text document accompanying the file download.

7) Radiohead guitarist Jonny Greenwood pops up to announce the arrival of the new song – the one that everybody already had. ‘There’s other stuff in various states of completion, but this is one we’ve been practicing, and which we’ll probably play at this summer’s concerts.’ Boo!

8) However! The zip file which Radiohead have released contains not only the song and the digital credit sheet, but also an album cover PDF. The PDF is a sequence of several images (including the twisted trees) and comes with these directions:

‘This is an artwork to accompany the audio file. We suggest you print these images out on tracing paper. Use at least 80gsm tracing paper or your printer will eat it as we discovered. You could put them in any order that pleases you.’

What does this mean? That there is more symbolism in the images? That more tracks can be unlocked? That Radiohead are on a mission to rid the world of printers through the canny destructive mechanism of tracing paper?


However, we do know this: ‘Wall of Ice’ is not only an exercise in crowd manipulation, it’s a genuine acknowledgement of the way in which the music industry now works, and one in the eye for the archaic and crumbling systems from which Radiohead have struggled to liberate themselves. The band is fast becoming as synonymous with technological mischief as they are with music, and for that we can only salute them.

As one blogger put it, ‘they make it fun to be a nerd’.

What do you think?

Not bad, eh? Leagues ahead of most.

Copy + paste via Contagious Magazine.