Today we presented our ideas on the logical workflow between all the parts, and we also discussed further interface design explorations and code problems.
My main task right now is animation, and I came up with the following workflow involving several Adobe programs. I first create the interface backgrounds by combining Illustrator and Photoshop (depending on the required output). Some elements are drawn with a freehand drawing tablet, others are built through shapes and in-program drawing. The moving objects (our main concern is the fish) are then animated in Photoshop; the CS6 version includes a timeline function for creating frame-based animations. After a lot of consideration I decided on GIF as the output format, since GIFs are easily importable into the next program: Edge Animate. There all the elements come together, and with the help of layers a final version can be created. The bonus with Edge Animate is that it leaves me (with a design background) lots of freedom during the design process and immediately gives me finished HTML5 code.
This code is then connected to our platform, and later the platform is additionally connected to the biosensors, which are attached to the body. In that way the interface should receive input from physiological signals: the biosensors detect them and send them to the central node, and in real time the user can see the interface react to their body signals… at least that's the plan.
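To make the signal-to-interface step a bit more concrete, here is a minimal JavaScript sketch of how a physiological reading could drive the fish animation. Everything here is an assumption for illustration: the reading format (`{ heartRate: … }`), the function names (`mapSignalToSpeed`, `onReading`), and the value ranges are all hypothetical, not part of our actual platform code.

```javascript
// Hypothetical sketch: map a heart-rate reading (beats per minute) to a
// playback-speed factor for the fish animation. Assumption: the central
// node pushes readings as JSON objects like { "heartRate": 72 }.

// Resting rate maps to normal speed (1x); higher rates make the fish swim faster.
function mapSignalToSpeed(heartRate, restingRate = 60, maxRate = 180) {
  // Clamp the reading to a plausible range before scaling.
  const clamped = Math.min(Math.max(heartRate, restingRate), maxRate);
  // Linear mapping: restingRate → 1.0x, maxRate → 3.0x.
  return 1 + (2 * (clamped - restingRate)) / (maxRate - restingRate);
}

// Hypothetical handler called whenever the central node sends a reading;
// the resulting factor could then adjust the Edge Animate symbol's playback.
function onReading(reading) {
  return { fishSpeed: mapSignalToSpeed(reading.heartRate) };
}

console.log(onReading({ heartRate: 60 }).fishSpeed);  // 1 (resting → normal speed)
console.log(onReading({ heartRate: 180 }).fishSpeed); // 3 (max → three times as fast)
```

The design choice here is just linear scaling with clamping, so noisy or out-of-range sensor values cannot freeze or explode the animation; the real mapping would of course be tuned once actual biosensor data is flowing.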