2024

from virtual worlds onward

Future virtual environments: Mirror Worlds and personal Augmented Reality devices

Tish Shute (Tara5 Oh in Second Life) is one of my reference points for virtual worlds and mirror worlds, augmented reality, devices, sensors and, in general, the topics this blog covers. The in-depth pieces she publishes on Ugotrade are not to be missed: her interviewees are prominent figures in these fields, and the reflections that come out of those conversations are often very interesting.

From the article Mobile Augmented Reality and Mirror Worlds: Talking with Blair MacIntyre I quote a few excerpts below, left in the original English; here is a summary of the main ideas. Imagine these as everyday features of our hypothetical 2024.

  • pervasive presence of virtual environments
  • people take part in virtual environments through different modes, tools and platforms, but…
  • …participants are represented in a common way, and…
  • …every participant is able to manipulate the content
  • much of the content we create is tied to real geographic coordinates
  • the models that make up the 3D virtual environment are synthesized semi-automatically from photographs and other data gathered by people
  • portable devices are normally in augmented reality mode: point one at a subject and it immediately gives you information about it (see the sketch right after this list)
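
To make the last two points a bit more concrete, here is a minimal sketch in Python; the names, the coordinates and the much simplified notion of "field of view" are purely illustrative assumptions of mine, not anything taken from the interview.

```python
import math
from dataclasses import dataclass

@dataclass
class GeoAnchoredItem:
    """A piece of content tied to real geographic coordinates (illustrative)."""
    title: str
    lat: float  # degrees
    lon: float  # degrees

def bearing_deg(lat1, lon1, lat2, lon2):
    """Approximate initial bearing from point 1 to point 2, in degrees from north."""
    d_lon = math.radians(lon2 - lon1)
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    y = math.sin(d_lon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(d_lon)
    return math.degrees(math.atan2(y, x)) % 360

def items_in_view(device_lat, device_lon, heading_deg, items, fov_deg=60):
    """Return the items whose bearing from the device falls inside the camera's field of view."""
    visible = []
    for item in items:
        b = bearing_deg(device_lat, device_lon, item.lat, item.lon)
        diff = abs((b - heading_deg + 180) % 360 - 180)  # smallest angular difference
        if diff <= fov_deg / 2:
            visible.append(item)
    return visible

# Example: a device in central Rome, camera facing roughly north-east.
nearby = [GeoAnchoredItem("Note anchored at the Trevi Fountain", 41.9009, 12.4833)]
print(items_in_view(41.8986, 12.4769, 45, nearby))
```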

folks in highly instrumented team rooms will collaborate in one way, and their activity will be reflected in the virtual world;  remote participants (e.g., those at home, or in a cafe or hotel) may control their virtual presence in different ways, but the presence of all participants will be reflected back out to the other sides in analogous ways.  We may see ghosts of participants at the interactive displays, or hear their voices in 3D space around us; everyone will hopefully be able to manipulate content on all displays and tell who is making those changes

[…]

it’s a matter of time till more of what we “create” (e.g., Tweets and blog posts and so on) are all geo-referenced; these will become the information landscape of the future
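
As a small, hypothetical illustration of what a geo-referenced post could look like, here is a sketch that wraps one in a GeoJSON Feature, a real and widely used format for geographic data; the property names and values are placeholders of mine, not a standard vocabulary.

```python
import json

# A blog post paired with the place it refers to, expressed as a GeoJSON
# Feature. GeoJSON points are [longitude, latitude]; the properties below
# are illustrative placeholders only.
post = {
    "type": "Feature",
    "geometry": {"type": "Point", "coordinates": [12.4833, 41.9009]},
    "properties": {
        "kind": "blog_post",
        "title": "Future virtual environments",
        "url": "http://example.org/post",       # placeholder URL
        "published": "2009-06-08T12:00:00Z",    # placeholder timestamp
    },
}
print(json.dumps(post, indent=2))
```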

[…]

you can start building the models of the world in a semi-automated way from photographs and more structured, intentional drive-by’s and so on. So I think it’ll just sort of happen. And as long there’s a way to have the equivalent of Mosaic for AR, the original open source web browser, that allows you to aggregate all these things. It’s not going to be a Wikitude. It’s not going to be this thing that lets you get a certain kind of data from a specific source, rather it’s the browser that allows you to link through into these data sources.
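
The "Mosaic for AR" idea, a browser that links through to many data sources instead of being tied to a single one, can be sketched roughly like this; the channel names and the query interface are my own assumptions, not anything MacIntyre describes.

```python
from typing import Callable, Dict, List, Tuple

Location = Tuple[float, float]                      # (lat, lon)
ChannelSource = Callable[[Location], List[dict]]    # returns geo-annotated items

class ARBrowser:
    """The browser knows nothing about any particular dataset; it only aggregates linked sources."""

    def __init__(self):
        self.channels: Dict[str, ChannelSource] = {}

    def link(self, name: str, source: ChannelSource):
        """Add a data source, the way a web browser follows a link to a new site."""
        self.channels[name] = source

    def view(self, location: Location) -> List[dict]:
        """Aggregate annotations from every linked channel for the current location."""
        results = []
        for name, source in self.channels.items():
            for item in source(location):
                results.append({"channel": name, **item})
        return results

# Two toy sources standing in for independent providers.
wiki_notes = lambda loc: [{"title": "Nearby landmark article"}]
friend_photos = lambda loc: [{"title": "Photo taken here last week"}]

browser = ARBrowser()
browser.link("wiki", wiki_notes)
browser.link("photos", friend_photos)
print(browser.view((41.8986, 12.4769)))
```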

So it’s that end that interests me. It’s questions like “what is the user experience”, how do we create an interface that allows us to layer all these different kinds of information together such that I can use it for all my things. I imagine that I open up my future iphone and I look through it. The background of the iphone, my screen, is just the camera and it’s always AR. I want the camera on my phone to always be on, so it’s not just that when I hold it a certain way it switches to camera mode, but literally it’s always in video mode so whenever there’s an AR thing it’s just there in the background.

[…]

Wrobel wrote, "The AR has to come to the users; they can't keep needing to download unique bits of software for every bit of content! We need an AR browsing standard that lets users log into and out of channels (like IRC) and toggle them as layers on their visual view (like Photoshop). Channels need to be public or private, hosted online (making them shared spaces) or offline (private spaces). They need to be able to be either open (chat channel) or closed (city map channel) as needed. Created by anyone, anywhere. Really, IRC itself provides a great starting point. Most data doesn't need to be persistent, after all. I look forward to seeing the world through new eyes. I only hope I will be toggling layers rather than alt-tabbing and only seeing one 'reality addition' at a time."
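
Wrobel's channel-as-layer model lends itself to a very small sketch: channels that can be public or private, hosted online or kept offline, and toggled on and off like layers rather than switched between one at a time. Everything below is my illustration of that idea, not an existing standard.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Channel:
    name: str
    public: bool = True      # public (e.g. a chat channel) vs. private/closed
    online: bool = True      # hosted online (shared space) vs. offline (private space)
    visible: bool = False    # toggled like a Photoshop layer

@dataclass
class ARView:
    channels: List[Channel] = field(default_factory=list)

    def toggle(self, name: str):
        for ch in self.channels:
            if ch.name == name:
                ch.visible = not ch.visible

    def visible_layers(self) -> List[str]:
        # All visible channels are composited at once, rather than
        # alt-tabbing between one "reality addition" at a time.
        return [ch.name for ch in self.channels if ch.visible]

view = ARView([Channel("city map", public=False), Channel("chat"), Channel("my notes", online=False)])
view.toggle("chat")
view.toggle("city map")
print(view.visible_layers())   # ['city map', 'chat']
```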

Today's bonus video: ARhrrrr

ARhrrrr is an augmented reality shooter for mobile camera-phones, created at Georgia Tech Augmented Environments Lab and the Savannah College of Art and Design (SCAD-Atlanta). The phone provides a window into a 3d town overrun with zombies. Point the camera at our special game map to mix virtual and real world content. Civilians are trapped in the town, and must escape before the zombies eat them! From your vantage point in a helicopter overhead, you must shoot the zombies to clear the path for the civilians to get out. Watch out though as the zombies will fight back, throwing bloody organs to bring down your copter. Move the phone quickly to dodge them. You can also use Skittles as tangible inputs to the game, placing one on the board and shooting it to trigger an explosion.

Mobile Augmented Reality and Mirror Worlds: Talking with Blair MacIntyre

Filed under: Previsioni generali
