From 726b62f304b1adfe33c769202ebb56c898b06e92 Mon Sep 17 00:00:00 2001
From: Harry Stuart <42882697+HStuart18@users.noreply.github.com>
Date: Wed, 1 Jan 2020 15:17:14 +1100
Subject: [PATCH] Update README.md

---
 README.md | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 508573d..c410e12 100644
--- a/README.md
+++ b/README.md
@@ -6,8 +6,9 @@ Since the inception of generative adversarial networks, I have been fascinated b
 
 Music generation has many different and exciting potential applications such as:
 
-Providing melody inspiration to artists
-Creating infinite, unique and free music without the need for audio file storage (for retail shops, restaurants, cafes, video games, radio stations etc.)
+- Providing melody inspiration to artists
+- Creating infinite, unique and free music without the need for audio file storage (for retail shops, restaurants, cafes, video games, radio stations etc.)
+
 GANs are already well-established in the image-processing domain, but not so much in NLP or audio-processing due to their sequential structure. After some investigaton, I learned about WaveGAN. So, I set out to adapt WaveGAN for piano in Tensorflow 2.0 using WGAN-GP as my training mechanism (as recommended by the paper).
 
 ## What it does
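
The README text above names WGAN-GP as the training mechanism. As a point of reference only, here is a minimal sketch of the gradient-penalty term in TensorFlow 2, assuming a `critic` model that takes raw-audio batches shaped `[batch, samples, 1]`; the function name, shapes, and the penalty weight are illustrative and not taken from the repository:

```python
import tensorflow as tf

def gradient_penalty(critic, real, fake):
    """WGAN-GP penalty: push the critic's gradient norm toward 1
    on random interpolations between real and generated audio."""
    batch_size = tf.shape(real)[0]
    # One interpolation coefficient per sample, broadcast over time and channel dims
    alpha = tf.random.uniform([batch_size, 1, 1], 0.0, 1.0)
    interpolated = real + alpha * (fake - real)
    with tf.GradientTape() as tape:
        tape.watch(interpolated)
        scores = critic(interpolated, training=True)
    grads = tape.gradient(scores, interpolated)
    # L2 norm of the gradient per sample (small epsilon for numerical stability)
    norms = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=[1, 2]) + 1e-12)
    return tf.reduce_mean((norms - 1.0) ** 2)

# Critic loss under WGAN-GP (lambda = 10 is the value suggested in the WGAN-GP paper):
# loss = mean(critic(fake)) - mean(critic(real)) + 10.0 * gradient_penalty(critic, real, fake)
```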