[ { "date": "2023-07-14 00:00:00 +0000", "title": "Sonic Mutations", "url": "/works/sonic-mutations/", "content": [{"type":"textblock","content":"In 2023 I worked with [Kopi Su Studio](https://kopisu.studio/) to produce visuals for Sonic Mutations; a generative music exploration developed by Kopi Su with artists Alexis Weaver and Rowan Savage (salllvage).\n\nThe work put bleeding edge AI music/audio generation tech in the hands of artists. Alexis and Rowan each developed a performance incorporated recorded and sampled audio with sounds generated live using the AI tools built by Kopi Su.\n\nI came on to the project to develop accompanying visuals for the performances that would respond live to the performance audio. The secondary goal was to reveal a bit of the AI model's internal state to the audience."},{"type":"figure","shape":"smallvideo","caption":"Sample clip of the visuals developed for Alexis Weaver's piece. Visuals are primarily abstract, and aimed to reflect the mutation of recorded audio in the piece. Text at the bottom reflected the text prompt Alexis used at any given moment and the percentage of denoising. This clip has been sped up.","src":"https://storage.googleapis.com/kb_site_files/images/sonic_mutations_Alexis.mp4"},{"type":"textblock","content":"The visuals for both performances were developed in consultation with the artists. I made a bunch of early experiments that we presented back to Rowan and Alexis and adapted the style based on what they resonated with.\n\nThe works primarily used graphics shaders (opengl) for most imagery, though for Rowans work we additionally used 3d models of a crow and a human, rendered with partial white lines joining points in the wireframe at random."},{"type":"figure","shape":"smallvideo","caption":"Sample clip of the visuals developed for Rowan Savage's piece. Rowan's performance transitioned his voice slowly to the call of a crow, then back to the voice of a human as understood by the generative AI. The visuals tracked this arc transitioning from the model of the human to that of the crow and back again. This clip has been sped up.","src":"https://storage.googleapis.com/kb_site_files/images/sonic_mutations_Rowan.mp4"},{"type":"textblock","content":"\n\nFurther performances with the tool (sonic mutations) and accompanying visuals are planned for this year. The full live stream of the original performance is available to watch [on the Sydney Opera House website](https://stream.sydneyoperahouse.com/outlines/videos/sonic-mutations-a-generative-music-exploration-outlines-2023)."}], "thumbnail": "https://storage.googleapis.com/kb_site_files/images/sonic_mutations_artists.jpg", "hover_thumbnail": "https://storage.googleapis.com/kb_site_files/images/sonic_mutations_artists.jpg", "thumbshape": "3/2", "slug": "sonic-mutations", "hero": "https://storage.googleapis.com/kb_site_files/images/sonic_mutations_hero.jpg", "subtitle": "Live performance visuals for exploratory electronic music" } , { "date": "2022-08-01 00:00:00 +0000", "title": "Open Poses", "url": "/works/open-poses/", "content": [{"type":"textblock","content":"In 2022 I worked with artist [Amrita Hepi](https://www.amritahepi.com/) to develop an interactive digital installation for the exhibition [Primavera 2022](https://www.mca.com.au/artists-works/exhibitions/primavera-2022-young-australian-artists/) at the Museum of Contemporary Art, Sydney. 
Amrita's work explores \"dance as social function\" and her practice is performed not only in physical performance spaces but also in the digital spaces of new media. \n\nAmrita's concept for the work was an interactive digital installation in which gallery-goers would attempt to manoeuvre and manipulate their bodies in step with hers. In particular, participants in the installation would mimic the body positions captured in a photographic archive of poses created and performed by the artist."},{"type":"gallery","caption":"Open Poses (archive images). Three example images from Amrita Hepi's archive of poses. In each, Hepi kneels or stands in front of a green screen to form a unique body position. © Amrita Hepi.","images":["https://storage.googleapis.com/amrita_artist_poses/amrita2_185.jpg","https://storage.googleapis.com/amrita_artist_poses/amrita2_221.jpg","https://storage.googleapis.com/amrita_artist_poses/amrita2_319.jpg"]},{"type":"textblock","content":"I joined the project as a creative technologist to prototype the work and to write the code for the final version. \n\nThe key challenges of the project, from the initial concept, were to find some way to measure the closeness of a participant's pose to those in Amrita's archive of poses, and to find a way to automatically superimpose a photo of the participant over Amrita's image. "},{"type":"figure","shape":"small","caption":"An early development experiment estimating Amrita's pose using the machine learning model MoveNet. Here certain key joints of the artist's body are identified by the model and visualised in a red wireframe.","src":"https://storage.googleapis.com/kieranbrowne-public-files/images/open_poses_dev_pose.jpg"},{"type":"textblock","content":"Pose estimation is a generally well-solved problem today, though the complexity of the poses in Amrita's archive certainly put the technology through its paces. \n\nMy prototype used the TensorFlow model [MoveNet](https://github.com/tensorflow/tfjs-models/tree/master/pose-detection/src/movenet) to compute pose data for each photograph in the archive and then compare it against the live pose data from my webcam."},{"type":"figure","shape":"small","caption":"An early prototype of Open Poses. The view compares pose data from Amrita (left) to live pose data from the webcam (right).","src":"https://storage.googleapis.com/kieranbrowne-public-files/images/open_poses_dev.gif"},{"type":"figure","caption":"Open Poses (detail). A gallery-goer interacts with Open Poses. © Amrita Hepi. Photograph: Anna Kučera.","src":"https://storage.googleapis.com/kieranbrowne-public-files/images/open_poses_2.jpg"}], "thumbnail": "https://storage.googleapis.com/kieranbrowne-public-files/images/open_poses_thumbnail2.jpg", "hover_thumbnail": "https://storage.googleapis.com/kieranbrowne-public-files/images/open_poses_thumbnail.jpg", "thumbshape": "3/2", "slug": "open-poses", "hero": "https://storage.googleapis.com/kieranbrowne-public-files/images/open_poses_1.jpg", "subtitle": "Interactive Digital Art Installation" } , { "date": "2021-06-01 00:00:00 +0000", "title": "Traces", "url": "/works/traces/", "content": [{"type":"textblock","content":"While I was at Google Creative Lab (2021-2022), one of the longest-running projects I worked on was a partnership with Australia's leading Indigenous publishing house, Magabala Books. Through the process we got to work with artist/poet Kirli Saunders and artist/illustrator Kamsani Bin Salleh to design and develop a project that applied digital technology in service of storytelling. 
The outcome of this was *Traces*: a voice-to-art experience honouring 60,000+ years of Indigenous storytelling."},{"type":"youtube","videoID":"1SqgdYTXUvw"},{"type":"textblock","content":"The wrap film above gives a strong sense of the meaning and experience of the work in the words of the artists. Traces was primarily shaped by aspects of Kirli and Kam's arts practices. The projections were generated in response to spoken word, both live in the space and from a poem written and recorded by Kirli, and the visual elements were drawn by Kam and recombined programmatically to form new compositions."},{"type":"textblock","content":"In the rest of this page I'll go into some detail about my role in developing the technical side of the project."},{"type":"textblock","content":"The concept for the work that Kirli and Kam most gravitated towards was a system to illustrate speaking and yarning. Kam is a prolific artist and illustrator, and Kirli an equally prolific artist and poet, so the concept met at the intersection of their practices."},{"type":"textblock","content":"Kam provided an \"alphabet\" of marks and illustrations as a basis for the generative system. The initial task was to convert these drawings into a form that we could animate and recombine to form new compositions."},{"type":"figure","caption":"Kam's illustrations with a basic visualisation of the data encoding. Here I'm beginning to extract mark data from the digital files created by Kam.","shape":"medium","src":"https://storage.googleapis.com/kb_site_files/images/traces_stroke_data.png"},{"type":"textblock","content":"Kam has developed a particular visual style of closely woven lines with hidden forms and figures. Our intention was to go some way towards producing compositions in his style. To do this we needed to write code to determine whether a collection of marks lay too close to (or indeed too far from) those adjacent."},{"type":"figure","shape":"smallvideo","caption":"Testing a system to determine how close adjacent marks can lie. New marks flash red when they intersect too closely with existing ones.","src":"https://storage.googleapis.com/kb_site_files/images/traces_intersect_test.mp4"},{"type":"textblock","content":"With these elements and Kam's alphabet of illustrations, we were able to generatively produce new compositions."},{"type":"figure","shape":"smallvideo","caption":"Early test of the composition system. Collections of marks are positioned in appropriate locations around each other to build up a composition.","src":"https://storage.googleapis.com/kb_site_files/images/traces_composition_demo.mp4"},{"type":"textblock","content":"We explored several ideas for the colour palette of the work, but what resonated most with Kirli and Kam was drawing on the colours of Country."},{"type":"gallery","caption":"A selection of satellite images of Country found on Google Maps. 
The colour palettes for the final work were programmatically drawn from satellite imagery following the path of the sun from east to west.","images":["https://storage.googleapis.com/kieranbrowne-public-files/images/traces_palette_1.jpg","https://storage.googleapis.com/kieranbrowne-public-files/images/traces_palette_2.jpg","https://storage.googleapis.com/kieranbrowne-public-files/images/traces_palette_3.jpg","https://storage.googleapis.com/kieranbrowne-public-files/images/traces_palette_4.jpg","https://storage.googleapis.com/kieranbrowne-public-files/images/traces_palette_5.jpg","https://storage.googleapis.com/kieranbrowne-public-files/images/traces_palette_6.jpg"]},{"type":"textblock","content":"In an early experiment I sampled satellite imagery in a line across Australia and used the imagery to generate colour palettes. This experiment turned into a more intentional sampling process to find beautiful palettes in the landscape from across the entire country."},{"type":"figure","shape":"smallvideo","caption":"Screen capture of an early experiment producing colour palettes from the landscape.","src":"https://storage.googleapis.com/kb_site_files/images/traces_palette_experiment.mp4"},{"type":"textblock","content":"Our goal was for the compositions, illustrations and so on to be generated in response to the spoken word. This in itself involved a long process of research to extract key features from the audio that could be used to drive the visuals in a way that felt natural and responsive. With some help from an audio expert, we extracted volume, pitch, intonation, pulse and pace from the audio waveform."},{"type":"vimeo","src":"https://player.vimeo.com/video/777201177?autoplay=1&loop=1&background=1&muted=1","caption":"The system extracted volume, pitch, intonation, pulse, and pace from the waveform."},{"type":"textblock","content":"These features were, in the end, key to making illustrations which appeared to flow from the voice. With the exception of pace, all of the other sonic features were used either to shape the marks or to change the way they were drawn on the composition."},{"type":"textblock","content":"This brings us to motion. To make the illustrations feel responsive they needed to flow out slowly for long vowels and hums and quickly for sharp, quick sounds, not simply appear out of nowhere. I developed a bunch of animation experiments to change the way that the marks appeared and make this feel fluid and alive."},{"type":"figure","shape":"smallvideo","caption":"A test of digitally animating Kam's illustrations. I was aiming here for a fluid motion that felt growth-like.","src":"https://storage.googleapis.com/kieranbrowne-public-files/images/traces_animation_test.mp4"},{"type":"textblock","content":"These elements (illustrations, composition, palettes, audio and so on) came together to form the final version of the work. 
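\n\nTo give a concrete, if simplified, sense of the audio side, the sketch below shows one way a live volume signal might be read from a microphone in the browser. This is an illustration on my part (the function name and structure are invented for the example) rather than the production pipeline, which extracted several more features with expert help.\n\n```typescript\n// Minimal sketch: read a rough volume (RMS) value from the microphone\n// using the Web Audio API, then hand it to the drawing system each frame.\nasync function startVolumeMeter(onVolume: (rms: number) => void) {\n  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });\n  const ctx = new AudioContext();\n  const analyser = ctx.createAnalyser();\n  analyser.fftSize = 2048;\n  ctx.createMediaStreamSource(stream).connect(analyser);\n\n  const samples = new Float32Array(analyser.fftSize);\n  const tick = () => {\n    analyser.getFloatTimeDomainData(samples);\n    // Root-mean-square of the waveform is a reasonable stand-in for loudness.\n    let sum = 0;\n    for (const s of samples) sum += s * s;\n    onVolume(Math.sqrt(sum / samples.length)); // e.g. scale how quickly a mark flows out\n    requestAnimationFrame(tick);\n  };\n  tick();\n}\n```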
\n\nThe version you can see below is how Traces was primarily developed. It was then adapted for the specific requirements and three-dimensionality of the Sydney Opera House's Tide Room projection space for the final performances that can be seen in the wrap film at the top of this page, but that's another story."},{"type":"vimeo","src":"https://player.vimeo.com/video/777208046?autoplay=1&loop=1&background=1&muted=1"}], "thumbnail": "https://storage.googleapis.com/kieranbrowne-public-files/images/traces_1.jpg", "hover_thumbnail": "https://storage.googleapis.com/kieranbrowne-public-files/images/traces_2_slim.jpg", "thumbshape": "3/2", "slug": "traces", "hero": "https://storage.googleapis.com/kieranbrowne-public-files/images/traces_1.jpg", "subtitle": "Voice Reactive Art Installation / Projection Work" } , { "date": "2018-07-18 00:00:00 +0000", "title": "Trace", "url": "/works/trace/", "content": [{"type":"textblock","content":"*Trace* is an interactive drawing performance work installed at the National Gallery of Australia for an exhibition on performative drawing."},{"type":"textblock","content":"The work repurposed a surveillance camera installed in the space to trace the movement of visitors to the exhibition. This surveillance footage, captured from above, was blurred and processed with custom software to produce an inky mark which would appear to smear across a screen installed in the space."},{"type":"youtube","videoID":"-0mE6mXCNZs"}], "thumbnail": "https://storage.googleapis.com/kb_site_files/images/trace_1.jpg", "hover_thumbnail": "", "thumbshape": "3/2", "slug": "trace", "hero": "https://storage.googleapis.com/kb_site_files/images/trace_1.jpg", "subtitle": "Interactive Digital Drawing Performance" } , { "date": "2018-06-04 00:00:00 +0000", "title": "Soft Bodies", "url": "/works/soft-bodies/", "content": [{"type":"textblock","content":"*Soft Bodies* is a graphics experiment, written for the browser, that makes use of OpenGL shaders and the [\"ray marching\"](https://en.wikipedia.org/wiki/Ray_marching) algorithm."},{"type":"textblock","content":"Ray marching is a technique that uses distance functions, rather than geometry, to produce three-dimensional imagery. Because no geometry is being manipulated, it's possible to reshape the resulting forms fluidly."},{"type":"textblock","content":"What I found while experimenting with this technique is that the resulting imagery can look surprisingly organic. Though clearly abstract, the shifting fleshy forms occasionally have flashes of resemblance to body parts: a knee-cap, shoulder or clavicle."},{"type":"gallery","caption":"Screenshots of \"Soft Bodies\" at particularly human-like moments.","images":["https://storage.googleapis.com/kb_site_files/images/soft-bodies-1.jpg","https://storage.googleapis.com/kb_site_files/images/soft-bodies-2.jpg","https://storage.googleapis.com/kb_site_files/images/soft-bodies-3.jpg"]},{"type":"textblock","content":"The shader that produced these images should be visible below; if it does not appear, try refreshing the page or opening this page on a different device. 
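\n\nIn the meantime, here is a highly simplified sketch of the core ray-marching loop, written as plain TypeScript rather than shader code. It is a generic illustration of the idea, not the shader behind these images: the scene is described by a signed distance function, and each ray steps forward by the distance to the nearest surface until it hits something.\n\n```typescript\ntype Vec3 = [number, number, number];\n\nconst length3 = (v: Vec3) => Math.hypot(v[0], v[1], v[2]);\n\n// Signed distance from point p to the surface of a unit sphere at the origin.\n// Blending and displacing functions like this is what lets the forms deform fluidly.\nconst sceneSDF = (p: Vec3) => length3(p) - 1.0;\n\nfunction rayMarch(origin: Vec3, dir: Vec3, maxSteps = 100): number | null {\n  let t = 0; // distance travelled along the ray\n  for (let i = 0; i < maxSteps; i++) {\n    const p: Vec3 = [origin[0] + dir[0] * t, origin[1] + dir[1] * t, origin[2] + dir[2] * t];\n    const d = sceneSDF(p);\n    if (d < 0.001) return t; // close enough: the ray has hit the surface\n    t += d;                  // safe to step forward by d without passing through anything\n    if (t > 100) break;      // the ray has left the scene\n  }\n  return null; // no hit\n}\n```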
"},{"type":"iframe","src":"https://kieranbrowne.com/shader/soft-bodies/"}], "thumbnail": "https://storage.googleapis.com/kb_site_files/images/soft-bodies-1.jpg", "hover_thumbnail": "https://storage.googleapis.com/kb_site_files/images/soft-bodies-2.jpg", "thumbshape": "1/1", "slug": "soft-bodies", "hero": "https://storage.googleapis.com/kb_site_files/images/soft-bodies-1.jpg", "subtitle": "WebGL Experiment" } , { "date": "2018-04-21 00:00:00 +0000", "title": "The Other Side", "url": "/works/the-other-side/", "content": [{"type":"textblock","content":"*The Other Side* is a ritual performance of the mathematics of an artificial neural network with pre-digital technology. The performance references the rites and nomenclature of the modern spiritualist movement, whose devotees held gatherings and séances across the Western world around the turn of the 20th century hoping to communicate with spirits of the dead. By manually performing the neural network, the work frames deep learning as communion with an emergent intelligence, presenting black-box neural networks as divination and questioning contemporary narratives of machine learning as artificial intelligence."},{"type":"textblock","content":"The ritual is enacted on a large analog computer made of several concentric discs, a string of wooden beads and a book which holds the weights and biases of the network."},{"type":"figure","shape":"small","caption":"Diagram showing the design of the computing apparatus. The outer two rings operate as a slide-rule (for multiplication) while the inner two add up the results of successive multiplications.","src":"https://storage.googleapis.com/kieranbrowne-public-files/images/seance-apparatus-diagram.jpg"},{"type":"textblock","content":"Strange as it may sound, to compute an \"artificial neural network\" you only need addition and multiplication. These operations are the same whether done with pen and paper, with a digital calculator, or, in the case of this work, with a wooden slide-rule."},{"type":"figure","src":"https://storage.googleapis.com/kb_site_files/images/seance_2.jpg","caption":"\"The Other Side\" as performed at AD Space gallery for The Fig Tree exhibition, September 2019. Photo by Kim Browne."},{"type":"textblock","content":"Comparisons between machine learning and magic are common even amongst experts and practitioners. Although the mathematics of neural networks is well-defined, the models they produce are invariably complex and indecipherable. They work, but it is difficult to explain why or how. When used in this way, deep learning is a form of divination, an arcane set of steps that delivers answers without explanations. Without explanation, to trust the network is an act of faith."},
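{"type":"textblock","content":"To make concrete just how little is involved, here is a minimal sketch of a neuron as a weighted sum, written in TypeScript. The numbers are invented for illustration; they are not the weights and biases held in the book used in the performance.\n\n```typescript\n// A single artificial neuron is just a weighted sum: multiply each input by\n// its weight and add the results together, along with a bias.\nfunction neuron(inputs: number[], weights: number[], bias: number): number {\n  let sum = bias;                  // bead step: start the running total\n  for (let i = 0; i < inputs.length; i++) {\n    sum += inputs[i] * weights[i]; // slide-rule step (multiply), bead step (add)\n  }\n  return sum;\n}\n\n// A network is just neurons feeding neurons; beyond this, real networks\n// usually only add a simple squashing function between layers.\nconst hidden = [neuron([1, 0], [0.4, -0.6], 0.1), neuron([1, 0], [-0.3, 0.8], 0.2)];\nconst output = neuron(hidden, [0.7, -0.5], 0.0);\n```"},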
{"type":"textblock","content":"The work was first performed for the CHI conference in Montreal in April 2018, and performed a second time for The Fig Tree exhibition at AD Space in September 2019."},{"type":"textblock","content":"Another performance, made for YouTube, can be viewed below."},{"type":"youtube","videoID":"1Y8MqnXYLQU"}], "thumbnail": "https://storage.googleapis.com/kb_site_files/images/seance_small.jpg", "hover_thumbnail": "https://storage.googleapis.com/kb_site_files/images/seance_small_2.jpg", "thumbshape": "3/2", "slug": "the-other-side", "hero": "https://storage.googleapis.com/kb_site_files/images/seance.jpg", "subtitle": "Performance Artwork and Associated Artefacts" } , { "date": "2018-01-01 00:00:00 +0000", "title": "Variations", "url": "/works/variations/", "content": [{"type":"textblock","content":"In 2018 I did a series of daily generative art experiments attempting to recreate, in code, numerous paintings, drawings and prints of 20th-century modern art."},{"type":"textblock","content":"For these studies I would find an artwork on a museum website (usually the Tate) and try to reproduce it as closely as I could in code. After creating a close copy I would play with the resulting code to produce variations of the composition. The resulting images are somewhere between copies and originals: technically novel, but deeply indebted to the creativity of the original author."},{"type":"gallery","images":["https://storage.googleapis.com/kieranbrowne-public-files/images/variations_2.png","https://storage.googleapis.com/kieranbrowne-public-files/images/variations_53.png","https://storage.googleapis.com/kieranbrowne-public-files/images/variations_3.png","https://storage.googleapis.com/kieranbrowne-public-files/images/variations_8.png","https://storage.googleapis.com/kieranbrowne-public-files/images/variations_7.png","https://storage.googleapis.com/kieranbrowne-public-files/images/variations_9.png","https://storage.googleapis.com/kieranbrowne-public-files/images/variations_5.png","https://storage.googleapis.com/kieranbrowne-public-files/images/variations_4.png","https://storage.googleapis.com/kieranbrowne-public-files/images/variations_6.png"]}], "thumbnail": "https://storage.googleapis.com/kieranbrowne-public-files/images/variations_53.png", "hover_thumbnail": "https://storage.googleapis.com/kieranbrowne-public-files/images/variations_3.png", "thumbshape": "1/1", "slug": "variations", "hero": "https://storage.googleapis.com/kieranbrowne-public-files/images/variations_53.png", "subtitle": "Generative Art Experiments" } , { "date": "2017-09-07 00:00:00 +0000", "title": "Pixel Sorting Experiments", "url": "/works/pixel-sorting/", "content": [{"type":"textblock","content":"A series of experiments arranging the pixels of one image to form another. Each image is a permutation of the coloured pixels of a modern painting. 
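\n\nOne way to do this kind of rearrangement (a rough sketch of the general idea rather than the exact method used here) is to sort the painting's pixels and the target image's pixels by brightness, then place each painting pixel at the position of its brightness-matched counterpart.\n\n```typescript\ntype Pixel = { r: number; g: number; b: number };\n\n// Perceptual-ish brightness of an RGB pixel.\nconst brightness = (p: Pixel) => 0.299 * p.r + 0.587 * p.g + 0.114 * p.b;\n\n// Rearrange the source painting's pixels to approximate the target image.\n// Assumes both images have the same number of pixels, so the output is a\n// true permutation of the source.\nfunction rearrange(source: Pixel[], target: Pixel[]): Pixel[] {\n  // Positions in the target, ordered from darkest to brightest pixel.\n  const targetOrder = target\n    .map((p, i) => ({ i, b: brightness(p) }))\n    .sort((a, b) => a.b - b.b)\n    .map((x) => x.i);\n\n  // Source pixels ordered from darkest to brightest.\n  const sortedSource = [...source].sort((a, b) => brightness(a) - brightness(b));\n\n  // Darkest source pixel goes where the darkest target pixel was, and so on.\n  const out: Pixel[] = new Array(target.length);\n  targetOrder.forEach((pos, rank) => {\n    out[pos] = sortedSource[rank];\n  });\n  return out;\n}\n```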
"},{"type":"gallery","images":["https://storage.googleapis.com/kb_site_files/images/pixel_sort_1.jpg","https://storage.googleapis.com/kb_site_files/images/pixel_sort_3.jpg","https://storage.googleapis.com/kb_site_files/images/pixel_sort_2.jpg"]}], "thumbnail": "/images/pixel-sort-thumbnail.jpg", "hover_thumbnail": "/images/pixel-sort-2.png", "thumbshape": "1/1", "slug": "pixel-sorting", "hero": "https://storage.googleapis.com/kb_site_files/images/pixel_sort_hero.jpg", "subtitle": "" } , { "date": "2015-04-01 00:00:00 +0000", "title": "Artificial Artists", "url": "/works/artificial-artists/", "content": [{"type":"textblock","content":"*Artificial Artists* is a series of art machines that contribute to the creation of visual \"ideas\" as well as the production of images. The machines vary in style and each was created to express a unique persona. The aim of the works is to push the limits of non-human artistry."},{"type":"youtube","videoID":"RjujVJF5MrE"}], "thumbnail": "/images/ava-1.jpg", "hover_thumbnail": "https://storage.googleapis.com/kieranbrowne-public-files/images/artificial_artists_1.jpg", "thumbshape": "3/2", "slug": "artificial-artists", "hero": "https://storage.googleapis.com/kieranbrowne-public-files/images/artificial_artists_1.jpg", "subtitle": "" } , { "date": "2013-08-01 00:00:00 +0000", "title": "Nest", "url": "/works/nest/", "content": [{"type":"textblock","content":"*Nest* is a generative film/installation intended to be viewed as a projection into the viewer's hands. Each of the three channels explores a concept of the universe."},{"type":"figure","shape":"medium","src":"https://storage.googleapis.com/kb_site_files/images/nest_2.jpg","caption":"The film as installed. To view the film in focus, the viewer cups their hands under the projector."},{"type":"textblock","content":"Below, the three channels of the film can be viewed side by side."},{"type":"youtube","videoID":"PTA159iEtbg"}], "thumbnail": "/images/nest-thumbnail.jpg", "hover_thumbnail": "", "thumbshape": "3/2", "slug": "nest", "hero": "https://storage.googleapis.com/kb_site_files/images/nest_1.jpg", "subtitle": "" } ]