Amazing new AI program transforms photos into gorgeous anime artwork【Video】
Posted June 8, 2018, 14:00:56 (News)
Monkeys with typewriters can write Shakespeare, but can machines create anime as well as humans?
The automation of a great deal of manual labour and factory work over the past few decades has, depending on how you look at it, either deprived honest, hard-working humans of their livelihood or freed millions from a life of toil to focus on more important things. In a change to the working environment unprecedented since the Industrial Revolution, machines have taken on more and more jobs once done by people. Until very recently, though, computers and industrial machinery have been restricted to repetitive actions within the parameters of their programming. It’s hard to feel worried when the most advanced robots in the world struggle to manage something as simple as walking, stumbling all over the place like drunken toddlers. Besides, we have something that differentiates us from animals and machines: the divine spark of creativity. Computers can’t write a poem or draw a cute moe anime character, right?
Actually, that last one might be about to change.
Using deep learning, a machine learning technique in which layered neural networks gradually improve by training on large sets of examples, an undergraduate from China’s Fudan University has been attempting to create a programme that can give photographs of real people an anime-style makeover. Each iteration of the programme, with its successes and failures, informs the next, so its ability to create anime versions of people should only ever get better. Yanghua Jin, the student behind the project (whose anime art-creating AI we also looked at last year), discussed his work at a Deep Learning workshop held in Tokyo in March this year, and his presentation can be seen in the video below.
Jin introduces the concept of Generative Adversarial Networks (GANs) and their role in improving the project’s adaptation of photographs. Simply put, a GAN uses two networks, one of which (known as the generator) is focused on producing anime images from photos using attributes taken from anime images, as in the image from the presentation below. The generator studies a number of attributes of the images, such as hair or eye colour, whether the hair is long or short, and whether the mouth is open or closed. It also recognises ‘noise’, such as the proportion of eye size to the rest of the face, or the angle at which the figure is posing. It then produces anime versions of photos in an attempt to ‘fool’ the other network, the discriminator.
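As a rough illustration of the two roles described above (a toy sketch only — the dimensions, weights, and the attribute/noise split are hypothetical stand-ins, not Jin’s actual model), the generator can be thought of as a function turning an attribute vector plus noise into an image, and the discriminator as a function scoring how real an image looks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes standing in for the real setup: the attribute vector encodes
# tags like hair colour or "mouth open", while the noise vector covers
# unlabelled variation such as pose angle. All dimensions are illustrative.
ATTR_DIM, NOISE_DIM, IMG_DIM = 8, 16, 64

# Generator: maps (attributes + noise) to a fake "image" vector.
G_W = rng.normal(scale=0.1, size=(ATTR_DIM + NOISE_DIM, IMG_DIM))

def generator(attrs, noise):
    return np.tanh(np.concatenate([attrs, noise]) @ G_W)

# Discriminator: maps an image vector to a probability that it is real.
D_W = rng.normal(scale=0.1, size=IMG_DIM)

def discriminator(img):
    return 1.0 / (1.0 + np.exp(-img @ D_W))  # sigmoid squashes to (0, 1)

attrs = rng.random(ATTR_DIM)        # e.g. hair colour, eyes open...
noise = rng.normal(size=NOISE_DIM)  # pose, proportions, etc.
fake = generator(attrs, noise)
print(discriminator(fake))  # discriminator's guess that "fake" is real
```

In a real system both functions would be deep convolutional networks rather than single matrix multiplications, but the division of labour is the same.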
The discriminator then compares the images the generator has produced against its library of images to judge whether each one is synthetic or genuine. The two networks learn from their mistakes: the generator gradually becomes better at producing convincing images, while the discriminator becomes better at telling real from fake. Jin explains that CycleGAN, a variant of the GAN approach, has often been used successfully to apply different textures to images or footage, and is used by companies that want to visualise things like interior decorating changes. He gives the example of video footage of a horse to which zebra stripes have been applied by the GAN process, and also explains how the irregular, unrealistic proportions of moe anime features present problems when working with GANs.
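The adversarial back-and-forth between the two networks can be sketched in miniature (again a hypothetical toy, not the project’s training code: the ‘images’ here are four-element vectors and both networks are single layers, with hand-derived gradients in place of a deep learning framework):

```python
import numpy as np

rng = np.random.default_rng(1)
DIM = 4     # toy 4-"pixel" images; real GANs use deep convolutional nets
lr = 0.05   # learning rate

# Hypothetical stand-in for a dataset of genuine drawings: samples
# clustered around a fixed pattern.
def sample_real():
    return np.full(DIM, 0.8) + 0.05 * rng.normal(size=DIM)

g_w = rng.normal(scale=0.1, size=(DIM, DIM))  # generator parameters
d_w = rng.normal(scale=0.1, size=DIM)         # discriminator parameters

def G(z):
    return np.tanh(z @ g_w)            # noise -> fake image

def D(x):
    return 1 / (1 + np.exp(-x @ d_w))  # image -> P(image is real)

for step in range(200):
    z, real = rng.normal(size=DIM), sample_real()
    fake = G(z)

    # Discriminator step: gradient of -log D(real) - log(1 - D(fake)),
    # i.e. learn to score real images high and fakes low.
    d_w -= lr * (-(1 - D(real)) * real + D(fake) * fake)

    # Generator step: gradient of -log D(G(z)) through the tanh,
    # i.e. learn to make fakes the discriminator scores high.
    fake = G(z)
    g_w -= lr * np.outer(z, (1 - fake**2) * (-(1 - D(fake)) * d_w))
```

Each network’s update is driven by the other’s current behaviour, which is the “learning from their mistakes” dynamic: the discriminator’s errors sharpen its judgement, and its judgement in turn steers the generator toward more convincing output.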
The early versions, with their uncalibrated images, wouldn’t have had many professional artists all that worried, the output varying from abstract swirls of colour to characters with deformed features.
-
But within a relatively short space of time, later generations of the GAN were producing much more realistic images, such that the naked eye might struggle to tell whether an image had been drawn by hand or by computer.
Since the images should only ever improve, this kind of technology could clearly be applied to a number of creative industries. While this project is the work of an undergraduate student and his anime-loving friends, the potential is obvious. Not all Japanese anime lovers, though, seemed upset that computers might be taking over their beloved art form.
‘The progression is so quick, at first they looked like monsters, but now…’
‘If everything becomes computer-made that would be boring, but with this kind of genre it’s all right.’
‘So computers can make images. How long until they’re writing and drawing manga?’
‘Whoa, those computer-generated moe girls are really cute.’
‘How long before the computer gets a taste for it and starts producing hentai?’
With talented artists still needed to create the data sets the programme bases its work on, or to supplement them with drawings of anime boys, which Yanghua Jin explains are much harder to come across online, there might still be some jobs going in the anime industry in the future. For the rest of us, our job will be to feed and maintain our robot overlords, but at least we’ll be kept entertained with anime while we do it. Those hoping to get Japanese visas for anime work might also want to get a move on.
Source: YouTube/AIP RIKEN via Jin
Top image: Pixabay
Insert images: YouTube/AIP RIKEN
Source: SORA NEWS24