Rather than sharing the thoughts and feelings of one artist, how could public art be used to reflect the thoughts and feelings of the community members who experience and interact with it?
New machine learning models, such as diffusion models, can generate images of anything you can imagine. For this project, I was interested in taking this technology out of the computer and using it to give full creative control to someone interacting with an art installation. Further, I wanted to make the process of generating and experimenting with audio and visuals as seamless and intuitive as possible.
Using diffusion, sentiment analysis, and regression analysis, this project lets a user speak whatever content they wish to display and watch it appear. Ambient music also plays, matching the emotional quality of the spoken prompt. The user can then play with a touch-screen UI, connected to a regression model through Wekinator, to further adjust the audio and visual effects.
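As a rough illustration of the prompt-to-music step, the sketch below scores the emotional quality of a transcribed prompt and forwards that score to Wekinator as an OSC input. The specific libraries (NLTK's VADER for sentiment, python-osc for messaging) and Wekinator's default port and address are assumptions for the sake of the example, not necessarily what the installation uses.

```python
"""
Minimal sketch: score a spoken prompt's sentiment and send it to
Wekinator as an OSC input. Library and port choices are assumptions.
"""
from nltk.sentiment import SentimentIntensityAnalyzer  # requires nltk.download("vader_lexicon")
from pythonosc.udp_client import SimpleUDPClient

# Wekinator's default input port and OSC address (assumed defaults).
WEKINATOR_HOST = "127.0.0.1"
WEKINATOR_PORT = 6448
WEKINATOR_INPUT_ADDR = "/wek/inputs"


def prompt_sentiment(prompt: str) -> float:
    """Return VADER's compound sentiment score in [-1, 1] for the prompt."""
    scores = SentimentIntensityAnalyzer().polarity_scores(prompt)
    return scores["compound"]


def send_to_wekinator(value: float, client: SimpleUDPClient) -> None:
    """Send the sentiment score as a single float input to Wekinator."""
    client.send_message(WEKINATOR_INPUT_ADDR, [value])


if __name__ == "__main__":
    client = SimpleUDPClient(WEKINATOR_HOST, WEKINATOR_PORT)
    prompt = "a glowing field of sunflowers at dawn"  # hypothetical transcribed prompt
    sentiment = prompt_sentiment(prompt)
    send_to_wekinator(sentiment, client)
    print(f"prompt sentiment {sentiment:+.2f} sent to Wekinator")
```

A trained Wekinator regression model listening on that port could then map the sentiment value (and any touch-screen inputs sent the same way) to continuous audio and visual parameters.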