Too Much TV Can Dull Your Brain

See the Dreams of an Artificial Brain: Photos

It's easy to mistake this photo for some kind of surreal landscape painting, but this image in fact shows off the imagination of Google's advanced image detection software. Similar to an artist with a blank canvas, Google's software constructed this image out of nothing, or essentially nothing, anyway. This photo began as random noise before software engineers coaxed this pattern out of their machines. How is it possible for software to demonstrate what appears to be an artistic sensibility? It all begins with what is basically an artificial brain.

Artificial neural networks are systems consisting of between 10 and 30 stacked layers of synthetic neurons. In order to train the network, "each image is fed into the input layer, which then talks to the next layer, until eventually the 'output' layer is reached," the engineers wrote in a blog post detailing their findings. The layers work together to identify an image. The first layer detects the most basic information, such as the outline of the image. The next layers home in on details about the shapes. The final output layer provides the "answer," or identification of the subject of an image. Shown is Google's image software before and after processing an image of two ibis grazing to detect their outlines.
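
To make the layered picture more concrete, here is a minimal sketch of a stacked network in PyTorch. The layer counts, sizes and the ten output classes are illustrative assumptions, not Google's actual model.

    import torch
    import torch.nn as nn

    # A toy stack of layers: early layers respond to edges and outlines,
    # later layers to shapes, and the final layer produces the "answer."
    tiny_net = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1),   # first layer: basic outlines
        nn.ReLU(),
        nn.Conv2d(16, 32, kernel_size=3, padding=1),  # next layers: details about shapes
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(32, 10),                            # output layer: one score per hypothetical class
    )

    image = torch.rand(1, 3, 224, 224)   # a stand-in input image
    scores = tiny_net(image)             # the image is "fed into the input layer"
    print(scores.argmax(dim=1))          # index of the network's best guess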

Searching for shapes in clouds isn't just a human pastime anymore. Google engineers trained the software to identify patterns by feeding millions of images to the artificial neural network. Give the software constraints, and it will scout out patterns to recognize objects even in photos where the search targets are not present. In this photo, for example, Google's software, like a daydreamer staring at the clouds, finds all kinds of different animals in the sky. This pattern emerged because the neural network was trained primarily on images of animals.
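
A toy version of that training process might look like the sketch below; the tiny model and the random stand-in "photos" and labels are assumptions for illustration only, since the real system was trained on millions of actual images.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # A miniature stand-in for the network and its training data.
    net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
    optimizer = torch.optim.SGD(net.parameters(), lr=0.01)

    for _ in range(10):                                # real training runs over millions of photos
        images = torch.rand(8, 3, 64, 64)              # a batch of stand-in "photos"
        labels = torch.randint(0, 10, (8,))            # stand-in labels, e.g. animal species
        optimizer.zero_grad()
        loss = F.cross_entropy(net(images), labels)    # how wrong was the network's guess?
        loss.backward()
        optimizer.step()                               # nudge every layer toward a better answer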

How the machine is trained determines its bias toward recognizing certain objects within an otherwise unfamiliar image. In this photo, a horizon becomes a pagoda, a tree is morphed into a building, and a leaf is identified as a bird after image processing. The objects may have outlines similar to their counterparts, but none of the items in the "before" images are part of the software's image vocabulary, so the system improvises.

When the software recognizes an object, it modifies the photo to exaggerate the presence of that known pattern. Even when it correctly identifies the animals it has been trained to spot, the detection can be a little overzealous in finding familiar shapes, particularly after the engineers send the photo back through, telling the software to find more of the same and thereby creating a feedback loop. In this photo of a knight, the software appears to recognize the horse, but it also renders the faces of other animals on the knight's helmet, globe and saddle, among other places.

Taken a step further, with the output fed back through the network over several cycles, the artificial neural network restructures an image into the shapes and patterns it has been trained to recognize. Again borrowing from an image library heavy on animals, this landscape is transformed into a psychedelic dreamscape where the clouds are apparently made of dogs.
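
That feedback loop can be sketched roughly as gradient ascent on the image itself. The snippet below assumes a pretrained torchvision GoogLeNet and an arbitrarily chosen intermediate layer; both are stand-ins, not the engineers' exact setup.

    import torch
    from torchvision import models

    model = models.googlenet(weights="DEFAULT").eval()

    # Capture the response of one intermediate layer (the choice is arbitrary here).
    activations = {}
    model.inception4c.register_forward_hook(
        lambda module, inp, out: activations.update(value=out))

    # Start from a real photo or, as here, from random noise.
    image = torch.rand(1, 3, 224, 224, requires_grad=True)
    optimizer = torch.optim.Adam([image], lr=0.05)

    for step in range(100):                     # each cycle feeds the result back through
        optimizer.zero_grad()
        model(image)                            # forward pass fills activations["value"]
        loss = -activations["value"].norm()     # "find more of the same": amplify what fired
        loss.backward()
        optimizer.step()
        image.data.clamp_(0, 1)                 # keep pixel values displayable

Each pass exaggerates whatever the chosen layer already responds to, which is why familiar shapes keep multiplying the longer the loop runs.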

At its most extreme, the neural network can transform an image that started as random noise into a recognizable but still somewhat abstract kaleidoscopic expression of objects with which the software is most familiar. Here, the software has detected a seemingly limitless number of arches in what was a random collection of pixels with no coherence whatsoever.

This landscape was created with a series of buildings. Google is developing this technology in order to boost its image recognition software. Future photo services might recognize an object, a location or a face in a photo. The engineers also suggest that the software could one day be a tool for artists that unlocks a new form of creative expression and may even shed light on the creative process more broadly.

More bad news for lying supine on the sofa and watching television: doing a lot of it as a young adult is linked with worse cognitive function later in life, according to new research.

Specifically, young adults who were physically inactive, as measured by duration and intensity of exercise, and watched three or more hours of TV each day had slower cognitive processing and poorer executive functioning 25 years down the road.

All manner of research has been done on the dangers of TV viewing, from how it affects children's brain development to how it raises the risk of type 2 diabetes. While other recent research has connected too much TV with an increased risk of Alzheimer's, relatively few studies have looked at its long-term cognitive effects.

“Participants with the least active patterns of behavior — i.e., both low physical activity and high television viewing time — were the most likely to have poor cognitive function,” note the researchers.

In an era of superlative television programming — and whole seasons being made available at once, enabling marathon viewing sessions — these findings are a good reminder to limit TV time.

Your future self, and her whip-smart brain, will thank you.