The problem with deepfake AI content in Korea

AI, like any other technology, can be wonderful and useful if used properly. I use it daily for many things, from creating images for this blog to practising conversations in Korean. You can learn a lot with it, and even though it is not a perfect tool and its output should always be double-checked, it is useful and will shape our lives even more in the future.

However, as with anything in life, some malicious people use technology to harm others. This is nothing new; it happens whenever a disruptive technology arrives. Today I am going to talk about the problem of sexual deepfake content in South Korea.

Image generated with AI.

Recently, the Korea Communications Standards Commission stated that it had reviewed 15,808 videos of deepfake sexual content. This number has been growing year after year since the AI boom, and the problem will only get more concerning as AI improves. The internet also has a property that can be very good or very bad depending on the context: once something is published online, it is very hard to remove. That is good when someone tries to censor legitimate content, but very bad in cases like this.

I think this problem is impossible to solve by removing videos alone. Authorities should impose severe fines and prison sentences on the people who create this content. A lot of education is also needed, especially for younger generations, and it should be designed specifically for them. If a teenager is caught doing this to anyone, the punishment should be severe enough to serve as an example to others, such as having to repeat the whole school year and do community service for a long time, rather than a fine that their parents will end up paying.

The harder problem is the criminal networks around this content and the adult offenders. Laws against this kind of crime have recently become stricter because of the alarm it has generated in Korean society. But I am a bit pessimistic: nowadays almost everyone carries a camera, and with current AI it is not difficult to create explicit content from a single image. The only viable path seems to be self-regulation by AI providers, blocking the prompts used to generate this content, together with video removal tools. Time will tell, but it seems this is the only way.

Source: Yonhap News
