UX and IxD issues in VR.

In our research, we have examined several categories of interactive narrative in the context of user engagement with technology.

One topic I consider essential, but which we could not develop in depth, is the design of environments in virtual reality (VR). It is particularly pressing in the current state of VR, where realistic tactile simulation does not yet exist; and even when technology allows us to experience the sense of touch, the logic of simulated environments leads to a new paradigm of user interface and user experience design. These issues often determine agency, that is, “the ability to act in any given environment” (WizardofAz and Balabanian, 2016). In other words, they define who the user is within the virtual reality.


(Image source: NASA, via Archive.org)

The most important question that arises is this: in a VR design where users interact with objects that do not exist in the physical world, how do we apply the principles of design? How do we construct affordances or constraints for objects that cannot be touched? How do we interact with something impalpable? From a design point of view this seems very challenging. “If an object appears intangible, people have no mental model for it, and will not be able to reliably interact with it” (Motion, 2016).

This leads us to think about interaction in a more complex way, realising that other elements could enhance the user experience, and compels us to consider how certain design principles change within virtual environments. For example, in VR it is becoming very common to use audio to reinforce the possibilities of virtual objects. Similarly, designers modify certain characteristics of objects, such as colour or size, to improve functionality, and transform the visual perspective to delimit the potential of the experience. In some cases we even see variations on physical laws; for instance, heavy objects appear lighter, or gravity is altered, in order to strengthen the usability of the interactive system.

These new relationships not only affect the paradigms of design but also establish new semantic links, finding new meanings and possibilities in virtual storytelling. In VR, “seeing is doing”. (In this regard, the research on embodied montage in VR is interesting; see Tortum, 2016.) These issues highlight the need to think of VR as a new medium, reconsidering the User Centred Design (UCD) approach within this context.


-Motion, L. (2016) Designing Physical Interactions for Objects That Do Not Exist. Available at: http://blog.leapmotion.com/designing-physical-interactions-for-objects-that-dont-exist/ (Accessed: 11 February 2017).
-WizardofAz and Balabanian, A. (2016) Cause & Effect: VR’s Essential Interaction. Available at: https://medium.com/@WizardofAz/cause-effect-vr-s-essential-interaction-efff0471b470#.lflqoon7w (Accessed: 11 February 2017).
-Tortum, H. (2016) Embodied Montage: Reconsidering Immediacy in Virtual Reality. Available at: http://cmsw.mit.edu/wp/wp-content/uploads/2016/10/326754928-Deniz-Tortum-Embodied-Montage-Reconsidering-Immediacy-in-Virtual-Reality.pdf (Accessed: 11 February 2017).
-NASA (no date) VR [image]. Available at: https://s-media-cache-ak0.pinimg.com/236x/b5/66/6e/b5666eef4287f92456492e618a28f04e.jpg (Accessed: 11 February 2017).

openFrameworks

Four exercises around my REFLECTIONS ON REAL TIME 1 AND 2.

Since the end of the 1960s, when scientists and artists started to develop experiments and artistic practices using technology, a new concept emerged: “real time”. Pioneering researchers such as Engelbart or Sutherland defined it in their theoretical essays, trying to help users cope with the long-drawn-out time imposed by punched cards: “using a computer in real-time working in association with a human to improve his effectiveness” (Engelbart, 1962).

More: Jorge_16021835_Assig1

-Engelbart, D. (1962) Augmenting Human Intellect: A Conceptual Framework. Available at: https://www.dougengelbart.org/pubs/augment-3906.html (Accessed: 12 February 2017).

Fun Machine – App

Our team consists of Parijat Bhattacharya, Jorge Caballero and Arpan Mitra. Our app is called “Fun Machine”, and it is aimed at children between the ages of 5 and 8.

About Fun Machine:

“Fun Machine” is an app for children aged 5 to 8. Children in this age group are often bored when they have nothing entertaining to do. At the same time, there is a need for children to explore their creative side rather than being passive consumers of entertainment.

Keeping these factors in mind, our goal was to create an app that entertains bored children by stimulating them to create interesting things, making use of the random objects that people often have around the house.

For the FTUE (first-time user experience), we have focused on ease of use, since we want our app to be usable by children who are not seasoned technology users. Any child should be able to get started without any hassle at all.

Since we have created this app for children, we have made an effort to ensure that it is safe to use. The app does not involve any dangerous objects. Children also do not need to read anything to use the app, even though most children in this age group can already read: all the instructions and tutorials are visual.

And since this app is meant for children, there is also a chance that they will do things accidentally. To reduce accidental usage, we have made sure to include parental supervision features, through which parents manage the app’s numerous settings and options. Since children do not need to access these settings themselves, and may stumble upon them unknowingly, we have added a small layer of security that only parents can get past.
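As a rough illustration, such a gate could be as simple as a multiplication question that an adult answers easily but a 5-8 year old is unlikely to pass. This is a hypothetical sketch of the idea, not the logic shipped in the app:

```cpp
#include <cstdlib>

// Hypothetical parental gate: the app shows "a x b = ?" and only unlocks
// the settings screen when the correct product is entered.
struct ParentalGate {
    int a, b;  // the two factors shown to the user
    bool unlock(int answer) const { return answer == a * b; }
};

// Pick two factors between 6 and 12, so the product is beyond the
// multiplication skills expected at ages 5-8.
ParentalGate makeGate() {
    return ParentalGate{ 6 + std::rand() % 7, 6 + std::rand() % 7 };
}
```

A time-out after a few wrong answers would further discourage guessing.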

“Fun Machine” is an easy-to-use and fun app, and our USP is that it allows bored children to take pictures of household objects and translate them into recipes or crafts. We haven’t found anything similar for children in our market research, which is why we think it will be a good choice in the market.

Additionally, since the app uses materials from home, we have planned it as a free app, covering production costs through advertising; that is to say, brands pay a fee to promote the use of their products: for instance, sugar brands, cardboard brands, etc.

Sketch, prototyping and 3D printing

The aim of my thesis project is to develop a story-telling system that would visualise tweets in a virtual space and use these visualisations to create immersive stories based on user inputs.
The research questions at this stage are:
•What is the role of micro-blogging/social-media text messages in digital storytelling?
•How can text be visualised in a VR environment?
•How can stories be created out of these visualisations?

At this stage the user is limited to entering a trigger for the storytelling machine, which then produces a visual story automatically. Future versions will let users interact with the story contents and influence the storytelling process.
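A minimal sketch of what this trigger-based stage could look like: the trigger words and asset file names below are invented placeholders, not part of the actual thesis implementation.

```cpp
#include <map>
#include <string>
#include <vector>

// Map a single trigger word to an ordered sequence of visual assets that
// the machine plays back as an automatic story. Unknown triggers yield an
// empty story. All names here are illustrative placeholders.
std::vector<std::string> storyFor(const std::string &trigger) {
    static const std::map<std::string, std::vector<std::string>> stories = {
        { "rain", { "clouds.png", "drops.mov", "street.mov" } },
        { "city", { "skyline.png", "traffic.mov" } },
    };
    auto it = stories.find(trigger);
    if (it == stories.end()) return {};  // unknown trigger: no story
    return it->second;
}
```

The future interactive version would replace this fixed lookup with a process the user can steer mid-story.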

Since we live in digital environments with an abundance of media, I find it interesting to review how repositories and collections are used to generate new interpretations. Digital libraries like archive.org or freesound.org provide new possibilities for Interactive Digital Narrative, combining and reusing materials to produce new meanings.

One of the most relevant works on this topic is the German art historian Aby Warburg’s “Mnemosyne Atlas”, later collected and reviewed by the French philosopher Georges Didi-Huberman. Warburg named the Atlas after the Greek Titan Mnemosyne, goddess of memory (and the source of the word “mnemonic”).

For my thesis, I have developed some sketches that I show below:

In a second stage, I developed some illustrations to better describe the goal of my project:

Additionally, I have developed a spherical rig for 360 video. The STL files were made with SketchUp.

I’ve been working on a rig model for six cameras, to record 360-degree videos like this one:

The model was designed with six boxes for Xiaomi Yi cameras (the Chinese equivalent of the GoPro), arranged around a cube.

Source: Author.

3D Printing:




The design was intended to keep all the optics aligned and at the same distance from each other, to avoid parallax problems.


Source: Wikipedia. More about the parallax effect here.
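A quick back-of-the-envelope calculation shows why the alignment matters. The 6 cm baseline used below is an illustrative guess, not a measurement of the actual rig:

```cpp
#include <cmath>

// Estimate of the parallax angle (in degrees) between two adjacent cameras
// whose optical centres are `baseline` metres apart, for an object
// `distance` metres away from the rig.
double parallaxDegrees(double baseline, double distance) {
    const double PI = std::acos(-1.0);
    return std::atan2(baseline, distance) * 180.0 / PI;
}
```

With a 6 cm baseline, an object half a metre away produces roughly a 7-degree disparity between neighbouring cameras, while an object 10 m away produces about 0.3 degrees. This is why stitching errors are far more visible on nearby objects, and why keeping the optics close together and aligned matters.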

I also proposed a version with the six cameras in a horizontal alignment rather than a spherical one, to gain resolution, at the cost of black space at the top and bottom of the video.


Source: Author.

Xiaomi Yi cameras are very cheap and easy to get, and allow us to generate high-quality 360-degree videos to distribute on 360 channels like Facebook, Vimeo or YouTube.


Source: Xiaomi Yi

Arduino and Processing

You can download all the examples from this LINK.

My first experiment controls drops of rain using a potentiometer.

Following the same idea, I modified a Space Invaders game so that it is controlled by a potentiometer and a button on the Arduino, sending both signals through the serial port. The potentiometer’s value ranges from 0 to 1023, and the button’s value is 1 or 0, depending on whether it is pressed. To tell the two signals apart in a single stream, when the button is pressed I send a value greater than 1023.
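The encoding can be sketched like this; the 1024 offset is one possible choice, since any offset above 1023 keeps the two signals distinguishable:

```cpp
// The potentiometer reading occupies 0-1023, so adding a fixed offset of
// 1024 when the button is pressed lets a single stream of integers carry
// both signals over the serial port.
const int BUTTON_OFFSET = 1024;

int encodeReading(int pot, bool buttonPressed) {
    return buttonPressed ? pot + BUTTON_OFFSET : pot;
}

// The receiving (Processing) side reverses the encoding, shown here in C++
// for brevity.
void decodeReading(int value, int &pot, bool &buttonPressed) {
    buttonPressed = value > 1023;
    pot = buttonPressed ? value - BUTTON_OFFSET : value;
}
```

Decoding recovers both the potentiometer position and the button state from each received integer.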

I also designed a simple example in the other direction, from Processing to Arduino, to control an LED.

And finally, thinking about my thesis, I developed an experiment that changes video properties in Processing through Twitter.


Pure Data

I have been working on different experiments with Pure Data. You can download the examples from here.


I started with simple sketches, like this one from here, which opens a file to be able to play it later:


Or this second example, doing something more complex: visualising the waveform and modifying the playback speed:

This third example is a “social machine”: every time the machine receives an audio input, it randomly plays a snore, applause, or a boo:
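Outside Pure Data, the patch’s decision logic can be sketched as follows; the threshold value and sample file names are illustrative placeholders, not values taken from the actual patch:

```cpp
#include <cstdlib>
#include <string>

// Whenever the input level crosses a threshold, one of three recorded
// reactions is chosen at random and played back.
std::string reactTo(double inputLevel, double threshold = 0.2) {
    if (inputLevel < threshold) return "";  // quiet input: no reaction
    static const char *samples[] = { "snore.wav", "applause.wav", "boo.wav" };
    return samples[std::rand() % 3];
}
```

In the patch itself, the random choice would typically be made with a [random] object routed to three sample players.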

Then I reviewed the possibilities of AM and FM synthesisers, and how to convert MIDI notes to frequencies, trying to explore the potential of the software:
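The MIDI-to-frequency conversion is the standard equal-temperament formula, which Pure Data’s built-in [mtof] object also computes:

```cpp
#include <cmath>

// MIDI note 69 (A4) corresponds to 440 Hz, and each semitone multiplies
// the frequency by the twelfth root of two.
double midiToFrequency(int note) {
    return 440.0 * std::pow(2.0, (note - 69) / 12.0);
}
```

So note 81 (an octave above A4) gives 880 Hz, and note 57 gives 220 Hz; the resulting frequency can drive an oscillator directly.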

I had previously worked on similar concepts for another module, with patches made in Max:

Adaptable objects.

Some reflections around the seminar led by Gearóid Mullins, Rachael Garrett, Billy Verlinden & Shane Cunningham.


One of the concepts that I found most interesting during the class, was the one stated by the American Professor Neil Gershenfeld: “The revolution is not additive versus subtractive manufacturing; it is the ability to turn data into things and things into data.” (Gershenfeld, 2012)

We can date that revolution to around 1952, when researchers at the Massachusetts Institute of Technology (MIT) designed the first computer system to control a milling machine; since then, the translation between data and objects has become increasingly sophisticated.
For instance, since 2012 the Tangible Media Group, led by Professor Hiroshi Ishii, has been working on a vision called “Radical Atoms”, in which materials can fundamentally change their form, rearranging their constraints and establishing their own affordances; something they have called a MUI, or “Material User Interface”.

This research area opens many doors in interactive design, especially from the perspective of mutability. If we consider an object that can be dynamic, meaning that its properties are not established a priori, we are forced to rethink the logic of creation.

Take the typical stages of the interaction design process (they vary by author)*, which we can summarise as research, design, prototype, test and subsequent iteration. If the object changes, in other words if its properties can adapt to the user’s needs, perhaps learning from them, then the design, prototyping and testing phases acquire a new significance, since they could even merge. The concept of User Centred Design (UCD) changes as well, to the extent that the user becomes the designer, because the object transforms according to their interests. The designer’s role also moves beyond the possibilities initially proposed: designers create systems capable of adapting, rather than fixed objects.
* There are several interaction design models. Summarising some of them:
-Winston Royce’s (1970) model comprised five stages: requirements analysis, design, coding, testing and maintenance.
-Barry Boehm’s (1988) model was a spiral, consisting of repeated cycles of tasks and risk analysis.
-The model proposed by the Interaction Design Foundation contains five stages: empathise, define, ideate, prototype and test.

-Gershenfeld, N. (2012) How to Make Almost Anything: The Digital Fabrication Revolution. Available at: http://cba.mit.edu/docs/papers/12.09.FA.pdf (Accessed: 21 February 2017).

-Tangible Media Group (no date) Available at: http://tangible.media.mit.edu/ (Accessed: 21 February 2017).

-Dam, R.F. (2017) 5 stages in the design thinking process. Available at: https://www.interaction-design.org/literature/article/5-stages-in-the-design-thinking-process (Accessed: 21 February 2017).