I don’t have any images from my Project Starline experience. Google had a strict “no photos, no videos” policy in place. Just me in a dark meeting room on the Shoreline Amphitheater grounds in Mountain View.

You walk in and sit down in front of a table. In front of you is what looks like a big, flat-screen TV. A lip below the screen extends out in an arc, encased in a speaker. There are three camera modules on the screen’s edges - one on top and one flanking each side. They look a bit like Kinects, in that way all modern stereoscopic cameras seem to.

The all-too-brief seven-minute session is effectively an interview. A soft, blurry figure walks into frame and sits down as the image’s focus sharpens. It appears to be both a privacy setting and a chance for the system to calibrate its subject.

One of the key differences between this Project Starline prototype and the one Google showed off late last year is a dramatic reduction in hardware. The team has cut the number of cameras from “several” down to a few and dramatically shrunk the overall system from something resembling one of those diner booths. The trick here is developing a real-time 3D model of a person with far fewer camera angles. That’s where AI and ML step in, filling in the gaps in the data - not entirely dissimilar from the way the Pixel approximates backgrounds with tools like Magic Eraser, albeit with a three-dimensional render.

After my interview subject - a member of the Project Starline team - appears, it takes a bit of time for the eyes and brain to adjust.