NEC brings translation up close and personal

Those who saw the last series of the Doctor Who spin-off Torchwood will remember the team using a contact lens which both housed a camera and displayed text for the wearer to read.

That’s not quite possible in reality yet, but NEC in Japan has come up with something similar: a pair of spectacles with a built-in camera and a projector which beams images onto the wearer’s retina. And eventually it could even act as a translation device.

The initial version of Telescouter, due for release next year, will allow the wearer to see text and images without the need for a large screen. Among the possible uses is allowing technicians to see the instructions for repairing a machine when they’re in a position where referring to a printed manual simply isn’t practical, such as when leaning into the depths of a photocopier.

It may also be used by sales staff so that they can refer to data about a customer’s past activity while holding an undisturbed conversation with them (or at least as undisturbed a conversation as you can have with somebody with a mini-projector clipped on to their spectacles).

If you’re liking the look of this so far, I’m afraid to tell you it might be a bit ambitious to put it on your Christmas list for 2010. The initial release is likely to be a system for 30 users, with a total price of 7.5 million yen, or just over $80,000.

(Some reports have put the cost at 750 million yen, or around $8 million, for the 30 pairs. The lower figure works out at roughly $2,666 per pair, which certainly sounds feasible for a set of high-tech specs; the higher one, though, would mean sending an engineer or sales rep off on the road wearing spectacles costing more than a quarter of a million dollars each, which seems pretty ludicrous.)

Things get even more exciting with NEC’s plans for 2011, when it expects to release a second version of the spectacles with the addition of a microphone and headset. The mic will pick up conversations and send the audio to a small waist-mounted computer, which will route it to a remote server. The server will use a combination of speech recognition and automated translation software to come up with a translation, which is then not only relayed to the user’s ear through speech synthesis but also displayed on their retina as subtitles.
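
For readers who like to think in code, here’s a minimal sketch of that pipeline. Everything in it is hypothetical: NEC hasn’t published any API, so the function names, the one-second audio stand-in and the canned Japanese-to-English result are purely illustrative of the flow described above (microphone to waist-mounted computer to remote server, then back to the wearer as synthesised speech and subtitles).

```python
"""Illustrative data flow only; every name below is a hypothetical placeholder."""


def capture_audio_from_mic() -> bytes:
    # The spectacles' microphone picks up the conversation.
    return b"\x00" * 16000  # stand-in for one second of raw audio


def recognise_speech(audio: bytes) -> str:
    return "konnichiwa"  # placeholder transcript from speech recognition


def translate(text: str, target_lang: str) -> str:
    return "hello"  # placeholder machine translation


def synthesise_speech(text: str) -> bytes:
    return text.encode()  # placeholder synthesised audio


def send_to_server(audio: bytes, target_lang: str) -> dict:
    # The waist-mounted computer forwards the audio to a remote server,
    # which runs speech recognition followed by automated translation.
    transcript = recognise_speech(audio)
    translation = translate(transcript, target_lang)
    return {"text": translation, "speech": synthesise_speech(translation)}


def present_to_wearer(result: dict) -> None:
    # The result is relayed to the ear via speech synthesis and
    # projected onto the retina as subtitles.
    print(f"[earpiece] playing {len(result['speech'])} bytes of synthesised speech")
    print(f"[retinal display] {result['text']}")


if __name__ == "__main__":
    audio = capture_audio_from_mic()
    result = send_to_server(audio, target_lang="en")
    present_to_wearer(result)
```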

To work properly, of course, both parties in a conversation would need to wear the spectacles. The idea, however, is that this wouldn’t just be a gimmick: it could be used for confidential negotiations where firms wouldn’t trust a human interpreter.