About.

PARTLY TRUE*

ArtGallery_AI is an NFT project that brings artificially generated art to Cardano. Our art is imagined by a GAN (Generative Adversarial Network). Made with love and passion, on Earth! This is our first step into the NFT world, and what blockchain could be better than Cardano?

Our vision is to bring high-quality, completely original NFT art to life. We are currently working on new stuff!

The artwork displayed in our gallery is created by a GAN and minted on the Cardano blockchain. Each generated work is unique, in both color and shape.

So how does this work?

The image creation process (driven by a GAN) consists of two steps:

1. Detect unknown forms and then produce a new image out of them to achieve the correct output (based on time estimation, learning, and examples).
2. Generate an output image that matches a given template (correct in general) and is good in particular (generative: no copyright-infringing content allowed).

The algorithm we developed specifically for this project is based on a variant of the Generative Adversarial Network (GAN). The GAN in this case uses an internal “learning rule” or “vector”. Essentially, we let two different “genes” (inputs) interact with each other to find the correct output. A gene can be a single word, such as “dog”, or a color combination, for example yellow and blue. A GAN is a “no-holds-barred” algorithm in which both the input and the output start out completely random (data randomization is needed for a good GAN); this is why it is called an adversarial network. The “genes” in our case are the individual files created by our artists using the GDAL image creation tool. The “generation vector” is the basis of the genetic algorithm.

The GAN uses the structure of an “adversarial example”: one of our artists (the “winner”) creates a given input on the basis of his knowledge about an image, i.e. his “genes”. This image is transformed into the target output according to the “learning rule”. The second artist (the “bluffer”) creates a random “insult” using one of his internal “gene” files. The “insult” file is sent to the first artist, who then sees the insult (sometimes stylized) and the rest of the image (usually distorted). The GAN tries to find a “good image” in the data, given that the trainer file is not a valid input (in our case, we use GDAL).
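Stripped of the metaphors, the “winner” and the “bluffer” correspond to the generator and discriminator of a standard GAN. Below is a minimal sketch of that adversarial training loop, assuming PyTorch; the layer sizes, names, and hyperparameters are illustrative, not our production code.

```python
# Minimal GAN training loop sketch (assumes PyTorch). Illustrative only.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28

# Generator: turns random "genes" (noise) into an image.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)
# Discriminator: judges whether an image looks real or generated.
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    noise = torch.randn(batch, latent_dim)  # random "genes"
    fake_images = generator(noise)

    # Discriminator step: tell real training images from generated ones.
    opt_d.zero_grad()
    d_loss = loss_fn(discriminator(real_images), torch.ones(batch, 1)) \
           + loss_fn(discriminator(fake_images.detach()), torch.zeros(batch, 1))
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator accept its images.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake_images), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()
```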

In our case, a prominent “color” in the image (pink) is usually used as the “genotype”, and the source image (a dog) as the “reaction vector”. Because pink is a popular color, we produce a good pink image out of the given data (such as the dog) to draw a strong reaction from the audience. The reaction vector always outweighs the random vector, and this has been the basis of many GANs. Our “training” data (sizes, colors, placement of “insults”) is called the “V” file. We developed an optimization algorithm to make the learning process more precise and faster (so that the GAN’s “eyes” can see more, but with less power).

The file that determines the output shape is called the “A” file; all images that will form the final output are then merged (randomly) into a single “t” file. The “t” file is used to generate the output from our “training” data (sizes, colors, placement of “insults”), and it is the basis of our pipeline: the GAN generator (which finds the form of the desired output), the GAN renderer (which produces a certain output shape), and the output renderer (which produces the final output).
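To illustrate what merging individual outputs into a single “t” file could look like, here is a small sketch using Pillow. The tiling layout, the file names, and the merge_to_t_file helper are assumptions made for the example; the actual “A”/“t” formats are internal to our pipeline.

```python
# Sketch: tile individual generated images into one combined "t" image.
# Grid size, tile size, and file names are illustrative assumptions.
from PIL import Image

def merge_to_t_file(paths: list[str], cols: int, tile: int = 256) -> Image.Image:
    rows = -(-len(paths) // cols)  # ceiling division
    canvas = Image.new("RGB", (cols * tile, rows * tile))
    for i, path in enumerate(paths):
        img = Image.open(path).convert("RGB").resize((tile, tile))
        canvas.paste(img, ((i % cols) * tile, (i // cols) * tile))
    return canvas

# Example usage (hypothetical file names):
# merge_to_t_file(["gen_0.png", "gen_1.png", "gen_2.png", "gen_3.png"], cols=2).save("t.png")
```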

Estimating the time it takes to create a work from the data is another challenging problem. Although there are many ways to solve it, our main aim was to use real-time estimation for both the training and the evaluation part of the pipeline.
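A minimal sketch of such a real-time estimate: time a handful of sample generations and extrapolate to the full batch. The generate callable here is a hypothetical stand-in for one run of our pipeline.

```python
# Sketch: estimate total generation time by timing a few sample runs.
import time

def estimate_total_seconds(generate, n_samples: int, n_total: int) -> float:
    start = time.perf_counter()
    for _ in range(n_samples):
        generate()  # hypothetical stand-in for one pipeline run
    per_image = (time.perf_counter() - start) / n_samples
    return per_image * n_total
```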

Evaluating the success of the works during the training stage also requires a “training to perform” approach. We use AI to execute the requested work, given the images provided by the artists.
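One simple way to score candidate works, shown purely as a sketch, is to reuse the trained discriminator from the training loop above: images it rates close to 1 pass for “real”.

```python
# Sketch: score candidate images with the discriminator defined earlier.
# A score near 1 means the image passes for "real". Illustrative only.
import torch

@torch.no_grad()
def score_images(images: torch.Tensor) -> torch.Tensor:
    return discriminator(images.flatten(start_dim=1))
```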

In this project, the GAN is a recurrent neural network: a neural network that applies a sequence of operations, each of which generates its own specific output. Our GAN is also a multi-stage computational network, consisting of at least three stages with constant communication between them.

We use a single GAN generator, which in this case is written in Python and contains a network of n GANs. This is a special kind of GAN in which we use multiple GAN generators, each generating a different output for its individual generator network (“genser”). We can save the produced images in an array (the one from the last stage) and reuse them later, for example to show the “good form” (the final output of the V files), as in the sketch below.
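A sketch of what such a “network of n GANs” could look like: several independent generators, each producing its own variant, with the results kept in an array for later reuse. The make_generator helper and the count of four are illustrative assumptions.

```python
# Sketch: an ensemble of independent generators whose outputs are saved
# in an array for later reuse. Sizes and count are illustrative.
import torch
import torch.nn as nn

def make_generator(latent_dim: int = 64, image_dim: int = 28 * 28) -> nn.Module:
    return nn.Sequential(
        nn.Linear(latent_dim, 256), nn.ReLU(),
        nn.Linear(256, image_dim), nn.Tanh(),
    )

generators = [make_generator() for _ in range(4)]

saved_outputs = []  # the reusable image array mentioned above
noise = torch.randn(1, 64)
for gen in generators:
    saved_outputs.append(gen(noise).detach())
```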

We use three common GAN techniques: the “stochastic generative adversarial network”, the “generative adversarial network”, and the “mixed network”. The main difference between these three techniques is their approach to the form of the generated outputs (the “samples”):

Stochastic Generative Adversarial Network – generates outputs based on the data provided by the generator, and uses an adversarial vector (called the “T-SNAPs”, the basis of the generator) to “generate” some form of the output. 
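As a sketch of the “stochastic” part: feeding the same generator different random noise vectors yields a different sample each time. The layer sizes here are illustrative assumptions, not our actual network.

```python
# Sketch: one generator, many random noise draws, each a distinct sample.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(64, 256), nn.ReLU(),
                          nn.Linear(256, 784), nn.Tanh())
samples = [generator(torch.randn(1, 64)) for _ in range(8)]  # 8 distinct outputs
```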

*Text written by AI – not 100 % accurate, but creative!