FUTURE FRUIT PREDICTOR

A Generative Artwork Based on Sound

What kind of fruit will grow on the wreckage left after human extinction?

- Project Intro -

 
 

Project Brief

Train and design a set of AI-controlled devices to interpret the meaning of the word “posthuman”.

 

About

This is a predictor that uses a trained AI to convert sound into environmental factors and forecast the new kinds of fruit that will grow in the future.

This project is based on Wekinator. Trained on a series of real recordings of natural sounds, the AI can "imagine" the environment of a future fruit from any sound input.

This work aims to raise the audience's environmental awareness. While training the AI, I also reflected a great deal on the human bias built into the process.

 

Project Info

Project Type

Solo

Time

4 Weeks

 

Tools

Wekinator

Processing

Adobe Photoshop

Skills

Machine Learning

JavaScript

Prototyping

 

Course

Posthuman AI Culture (Fall, 2019)

Instructor

Christine Meinders

 - Overview -

Input - Sound

Software - Wekinator

Output - New Fruit

 - Demo -

1

Ideation

Purpose

Can humans create a new species of fruit?

What will the fruits be like in the future?

What kind of fruit will grow on the wreckage left after human extinction?

While predicting the fruit of the future, this program also provokes reflection on the relationships between humans, nature, and other creatures.

Cultural+AI+Tool.jpg
 

Insights

The entire process of creating this predictor was a step-by-step effort to turn a very abstract concept into something practical. My original idea was simple: I wanted to see what fruit might be like in the future, but I didn't want to predict it with human brains, because human brains carry bias. Instead, I wanted to use AI to reduce that bias.

To realize the idea, I divided the whole process of prediction into two parts: (1) the data input and output outside the AI, and (2) the data processing inside the AI.

For the data input and output, I needed to determine the type of the input data, the purpose of the output data, and the biases that might enter the process.

For the data processing inside the AI, I needed to decide by what rules a sound would be translated into numeric environmental information, and how that information would be translated into a visual image of a fruit.

AI+Notes-1.jpg
AI+Notes-2.jpg

2

AI Training, Input & Output

AI Training: Sound

As mentioned, I use natural sounds as the input of my generative art. I collected a series of recordings of trees, wind, birds, and so on. Then I decided along which dimensions I wanted the AI to interpret each sound. I chose five: sunshine, precipitation, windiness, temperature, and the number of animals that would eat the fruit, and I rated each recording on every dimension based on my own perception after listening.
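To make this concrete, here is a hypothetical Python sketch of how one hand-labeled example could be structured. The feature names and values (`rms`, spectral centroid) and the ratings are illustrative assumptions, not the actual training data; in practice Wekinator receives audio features over OSC and records the paired output values internally.

```python
def make_example(features, sunshine, precipitation, windiness, temperature, animals):
    """Pair an audio-feature vector with the five 0-10 environment ratings."""
    for rating in (sunshine, precipitation, windiness, temperature, animals):
        if not 0 <= rating <= 10:
            raise ValueError("ratings are on a 0-10 scale")
    return {
        "inputs": list(features),
        "outputs": {
            "sunshine": sunshine,
            "precipitation": precipitation,
            "windiness": windiness,
            "temperature": temperature,
            "animals": animals,
        },
    }

# e.g. a breezy forest recording, rated by ear
# (hypothetical features: RMS loudness, spectral centroid in Hz):
example = make_example([0.42, 1830.0], sunshine=6, precipitation=2,
                       windiness=7, temperature=5, animals=4)
```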

Here I found that although I had set out to prevent human bias, it was unavoidable: there is no scientific standard for measuring the precipitation in a soundtrack. With no better solution at hand, I continued despite my worries.

 

Surprises

In fact, when I got to this step, I found that the variation in wind across the sound sources I used to train the AI was not large enough: most recordings were concentrated between levels 2 and 7. For levels 1 to 2 and 7 to 10, I could only hope the AI would extrapolate on its own. I worried that, once deployed, the AI would fail to recognize extreme wind conditions or behave unstably under them. In the end, though, the AI turned out to be quite sensitive to wind noise and reasonably stable, which exceeded my expectations.

 

Natural sound source: https://freesound.org/


Rules

Fortunately, I found a lot of existing research on fruit. A berry, it turns out, is a different fruit type from a drupe, and nuts are surprisingly categorized as fruit. For each fruit type, there is also plenty of research on the environments in which it grows. So the AI's predictions could be based on research results instead of my own opinion.

Some rules:
Sun+ = Saturation+
Rain+ = Juicy+
Wind+ = Size-
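The rules above can be sketched as a simple mapping from 0-10 environment ratings to visual parameters. The function name and the scaling factors here are my own assumptions; the project implemented this mapping in Processing.

```python
def environment_to_visuals(sun, rain, wind):
    """Map 0-10 environment ratings to visual parameters for the fruit image.

    Follows the listed rules: more sun -> higher saturation,
    more rain -> juicier, more wind -> smaller fruit.
    """
    saturation = sun / 10.0            # 0.0 (grey) .. 1.0 (vivid)
    juiciness = rain / 10.0            # 0.0 (dry) .. 1.0 (juicy)
    size = 1.0 - 0.5 * (wind / 10.0)   # strongest wind halves the size
    return saturation, juiciness, size
```

For example, a sunny, dry, calm environment yields a vivid, dry, full-size fruit: `environment_to_visuals(10, 0, 0)` returns `(1.0, 0.0, 1.0)`.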

 

Fruit classification rule source: https://www.fairchildgarden.org/portals/0/docs/education/downloadable_teaching_modules/flower%20power/fruit_classification1.pdf

fruit_classification1_Page_1.png
fruit_classification1_Page_2.png
fruit_classification1_Page_3.jpg

Output: Premade PNGs

I was less restricted when designing the output. I understood it more as the AI's expression than as a truly accurate prediction of future fruit. So I collected 32 illustrations of fruit. Using them as a base and simply changing their color, size, and so on in code, they became simple visual representations of the future fruits.

 

3

Prototype

Feasibility Test

Testing the parameters with a square

 

Prototype 1

The problem of the output being too jumpy was fixed by keeping arrays of recent input values and taking their average.
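A minimal sketch of that smoothing idea, assuming a fixed window over one of Wekinator's output streams (the class name and window size are my own; the actual Processing sketch used its own arrays):

```python
from collections import deque

class SmoothedInput:
    """Running average over the last `window` values of one output stream."""

    def __init__(self, window=10):
        self.values = deque(maxlen=window)  # old values fall off automatically

    def add(self, value):
        self.values.append(value)
        return self.smoothed()

    def smoothed(self):
        return sum(self.values) / len(self.values)

# A jumpy stream settles toward its recent mean instead of flickering:
wind = SmoothedInput(window=4)
for v in [2.0, 8.0, 3.0, 7.0]:
    wind.add(v)
# wind.smoothed() is now (2 + 8 + 3 + 7) / 4 = 5.0
```

Averaging trades a little responsiveness for stability: a larger window gives a calmer fruit, a smaller one reacts faster to the sound.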


Final Demo

Video (best watched with sound)
Three main parts

 
 
 
Input - Sound

AI Software - Wekinator

Output - New Fruit

 - Reflection -

First of all, on the technical level, I don't think my predictor has reached its ideal state yet. The fruit it predicts clearly tends to jump between different types, whereas I would like it to transition smoothly from one fruit type (such as a plum) to another (such as a berry). It also cannot yet predict the shape of a fruit well; for now it can only pick whatever it considers the closest of the known fruit types. I think both problems could be solved by introducing a more image-based AI algorithm.

Secondly, on the conceptual level, the further the project progressed, the more I found that bias on the data-input side was difficult to eliminate. For example, when I define the amount of wind in an environmental soundtrack, I can only tell the AI, according to my intuition, "Track 1 was recorded at wind level 8." But how is wind level 8 defined? If it is defined by my senses, is the gap between levels 8 and 7 equal to the gap between levels 9 and 8? Might what I call level 8 be level 6 to someone else's ears? My intention in making this predictor was to eliminate the personal biases humans carry when making predictions, but these thoughts reminded me that even with AI, I cannot fully believe it can be unbiased.

 
YiXie_ProcessStory_ReflectionDiagram.jpg