There. It isn’t even “real” racism, it’s more flamebait, where the more outrageous and deranged a take is, the more likely it is to capture attention and maybe even provoke a reaction. Most likely they primarily wanted to earn a buck from viewer engagement and didn’t care about the ethics of it. Maybe they also had racist agendas, maybe not - but that’s just not the core of it.
And in the same spirit, the issue is not really racism or AI videos, but perversely incentivized attention economics. It just happened to manifest this way, but it could’ve been anything else - this is merely what happened to hit some journalist’s mental filters (suggesting that “racism” headlines attract attention these days, and so does “AI”).
And the only low-harm way I can think of to put this genie back in the bottle is to make sure everyone is well aware that their attention is the new currency of the modern age and spends it wisely, mindful of the addictive and self-reinforcing nature of some systems.
Stop trying to blame technology for longstanding social problems. That's a cop-out.
If you want to use ML to do anything at all with images and video, you will usually wind up creating the capability to generate images and video one way or another.
However, building a polished consumer product is a choice, and probably a mistake. Every technology has good and bad uses, but image/video generation seems to have few, trivial good uses and many severe bad ones.
Generating and distributing racist materials is racist regardless of the intent, even if the person "doesn't mean it".
Simple thought experiment: If the content were CSAM, would you still excuse the perpetrators as victims of perversely incentivized attention economics?
Racism is just less legally dangerous. There would be people posting snuff or CSAM videos if that “sold”. Make social networks tough on racism and it’ll be sexism the next day. Or extremist politics. Or animal abuse. Or, really, anything, as long as people strongly react to it.
But, yeah, to avoid any misunderstanding - I didn’t mean to say racism isn’t an issue. It is racist, it’s bad, I don’t argue otherwise. All I want to stress is that it’s not the real issue here, merely a particular manifestation of it.
Google wouldn't even need a fingerprint; they could just look up in their logs who generated the video.
It is very scary because the "tech-bros" in the movie pretty much mimic the actions of the real-life ones.
i.e. delete your Facebook, your TikTok, your YouTube and return to calling people on your flip phone and writing letters (or at least emails). I say this without irony (the Sonim XP3+ is a decent device). All the social networking on smart phones has not been a net positive in most people's lives, and I don't really know why we sleepwalked into it. I'm open to ideas on how to make living "IRL" more palatable than cyberspace. It's like telling people to stop smoking cigarettes. I guess we just have to reach a critical mass of people who can do without it and lobby public spaces to ban it. Concert venues and schools are already playing with it by forcing everyone to put their phones in those Faraday baggies, so maybe it's not outlandish.
The description of the channel on YouTube claims: "In our channel, we bring you real, unfiltered bodycam footage, offering insight into real-world situations." But then if you go to their site, https://bodycamdeclassified.com/, which is focused on threatening people who steal their IP, they say: "While actual government-produced bodycam footage may have different copyright considerations and may be subject to broader fair use provisions in some contexts, our content is NOT actual bodycam footage. Our videos represent original creative works that we script, film, edit, and produce ourselves." Pretty gross.
Admittedly that didn't satirize CSAM itself; rather it cut hard into the reflexive reaction people have at the very thought of CSAM and paedophiles.
https://en.wikipedia.org/wiki/Paedogeddon
Moreover, it took a human to thread that needle; it'll be a while before AI generation can pass through that strange valley.
If I told you many 14-year-olds were making very similar offensive jokes at lunch in high school, would you support adding microphones throughout schools to track and catch them?
Gonna be hard to admit, but mandatory identity verification like in Korea, i.e. attaching real consequences to what happens on the internet, is a more realistic way this is going to be solved. We've had "critical thinking" programs for decades, and they're completely pointless on an aggregate scale, primarily because the majority aren't interested in the truth. Outside of their specific expertise, it's quite common for even academics to easily fall into misinformation bubbles.
But I doubt most doomscrollers would notice that in their half-comatose state.
It IS real, unfiltered bodycam footage. From an actor, following a script, in front of one or many other actors, also following scripts. I think that's how they get away with it: they don't specify that it's bodycam footage from actual law enforcement. Yes, gross.
I like this reasoning. “Trolling” is when people post things to irritate or offend people, so if you see something that’s both racist and offensive then it’s not really racist. If you see somebody posting intentionally offensive racist stuff, and you have no other information about them, you should assume that the offensiveness of their post is an indicator of how not racist they are.
Really if you think about it, it’s like a graph where as offensiveness goes up the racism goes down becau
If I have to encounter a constant barrage of shitty racist (or sexist, or homophobic, or whatever) material just to exist online, I'm going to pretty quickly feel like garbage. (If not feel unsafe.) Especially if I'm someone who has other stressors in their life. Someone who is doing well, their life otherwise together, might encounter these and go, "Fucking idiots made a racist video, block."
But empathize with someone who is struggling? Who just worked 18 hours to make ends meet to come home and feed their kids and pay rent for a shitty apartment that doesn't fit everyone, and their kid comes up to them asking what this video means, and it just... gets past all their barriers. It wedges open so many doubts.
This isn't harmless.
It's a bowl of fun-size candy bars, with a few razors, a few drugs, a few rotten apples, etc. mixed in. You can, by and large, get the algorithm to serve you nothing but the candy, but you are still eating only candy bars at that point.
Some people can say no to infinite candy. Other people, like myself, cannot and it's a real problem.
LLMs don't think, and also have no race. So I have a hard time saying they can be racist, per se. But they can absolutely produce racist and discriminatory material. Especially if their training corpus contains racist and discriminatory material (which it absolutely does).
I do think it's important to distinguish between Photoshop, which is largely built from feature implementation ("The paint bucket behaves like this", etc.), and LLMs, which are predictive engines that try to predict the right set of words to say based on their understanding of human media. The input isn't a thoughtful set of decisions from PMs and engineers; it's "read all this, figure out the patterns". If "all this" contains racist material, the LLM will sometimes repeat it.
Definitely have watched enough videos from this channel to recognize its name. :(
Clutch your pearls as much as you want about the videos, but forcibly censoring them is going to cause you to continue to lose elections.
But, yeah, as weird as it may sound, you don’t have to be racist (as in believing in racist ideas) to be a racist troll (propagating racist ideas). Publishing and agreeing are different things, and they don’t always overlap (even if they frequently do). Let he who has never said or written some BS they didn’t believe a single iota of, just for effect, throw the first stone.
And I’m not sure how sarcastic you were, but nothing I’ve said could possibly mean that something being offensive somehow makes it less racist.
Exactly. Racism has nothing to do with what people say or do, it’s a sort of vibe, so really there is no way of telling if anything or anyone is Real Racist versus fake racist. It is important to point this out b
A picture is worth a thousand words. Me saying your mom is so fat that _______ in the lunchroom is different from me saying your mom is so fat in a cinematic video format that can go locally viral (your whole school). This is the first time in my life I'm going to say this is not a "history is echoing" situation. This is a "we have entirely gone to the next level" situation; forget what you think you know.
In all seriousness, nobody is concerned about capability. Everything was always possible if you were rich enough and had enough time. But scale matters. That's why the printing press literally created new religions.
If you refuse to distinguish between someone who genuinely believes in the concept of race, or postulates an inherent supremacy of some particular set of biological and/or sociocultural traits, and someone who merely talks edgy shit they heard somewhere and hasn’t given it much thought - then I’m not entirely sure how I can persuade you to see the distinction I do.
But I believe this difference exists and is important, because different causes require different approaches. Online trolls, engagement farmers, and bonehead racists are (somewhat overlapping but generally) different kinds of people. And any of those can post racist content.
I think the harm done by circulating racist media is "real" racism regardless of whether someone is doing it because they have a hateful ideology, are profiting from it, or are just having a good time.
I'm getting tired of people letting politics seep into the discussions on this website.
> For Content Thieves (Warning)
> If you are currently using Body Cam Declassified content without [...]
> You are in violation of copyright law and will be subject to legal action
[...]
> We aggressively pursue legal remedies against content theft, including statutory damages of up to $150,000 per infringement under U.S. [...]
> An additional administrative fee of $2,500 per infringing video will be assessed
> We demand all revenue generated from the unauthorized use of our content
> We maintain relationships with copyright attorneys who specialize in digital media infringement
> We recommend removing the infringing content immediately and contacting us regarding settlement options
A paragraph about the videos being fake is still there.
> While actual government-produced bodycam footage may have different copyright considerations and may be subject to broader fair use provisions in some contexts, our content is NOT actual bodycam footage.
> Our videos represent original creative works that we script, film, edit, and produce ourselves.
> As privately created content (not government-produced public records), our videos are fully protected by copyright law and are NOT subject to the same fair use allowances that might apply to actual police bodycam
> The distinction means our content receives full copyright protection as creative works, similar to any other professionally produced video content.
This reminds me of a non-AI content mill business strategy that has been metastasizing for years. People film homeless people and drug addicts and make whole Insta and YouTube channels monetizing it, either framed as "REAL rough footage from city XY" or even openly mocking helpless people. The latter seems to be more common on TikTok, and I'm not watching "original" videos of such shite.
There is a special place in hell for people who do such things, and in my opinion there should be laws with very harsh punishments for the people who "create" this trash and make money from it. When it comes to filming real people without their consent, we really need laws that make it possible to effectively punish the people who do this, because the victims are not likely to be able to defend themselves.
All in all, the whole strategy is to worsen societal division and tensions, and to feed bad human instincts (voyeurism, superiority complex) in order to funnel money into the pockets of parasites without ethics.
No offense meant, but unless you know of an experiment that indicated an absence of a statistically significant effect of education programs on collective behaviors, especially one that established causality as you stated, I would dare to suspect that this is not an accurate portrayal of things, but more of an emotionally driven and not entirely factual response.
> mandatory identity verification like in Korea, i.e. attaching real consequences to what happens on the internet
I'm not sure I understand the idea. Is it about making it easier for law enforcement to identify authors of online posts, or about real-name policies and peer pressure, or, possibly, something else?
That sounds like an abstinence-type approach. Not saying that it's not a valid option (and it can be the only effective option in case of a severe addiction), but it's certainly not the only way that could work. Put simply, you don't have to give up on modern technology just because it poses some dangers (but you totally can, if you want to, of course).
I can personally vouch for just remembering to ask myself "What am I currently doing, how am I feeling right now, and what do I want?" when I notice I'm mindlessly scrolling some online feed. Just realizing that I'm so bored I'm willing to figuratively dumpster-dive in the hope of stumbling upon something interesting (and there's nothing fundamentally wrong with that, but I must be aware that this interesting thing will be very brief by design, so unless I'm just looking for inspiration and then moving on, I'm not really doing anything to alleviate my boredom) can be quite empowering. ;-)
> all the social networking on smart phones has not been a net positive in most people's lives
Why do you think so? I'm not disagreeing, but asking because I know plenty of individual examples, yet I personally don't feel comfortable enough to generalize from them (because that's hard), and I wonder what makes you comfortable doing so.
There can be a follow-on discussion about what benefits, if any, are also provided by the aforesaid technology.
> But then if you go to their site, https://bodycamdeclassified.com/, which is focused on threatening people who steal their IP
I was on the go and not reading properly, sorry.
I keep trying to explain that no, it’s not real racism because if you can imagine that it’s not real, it must not be real, but then he says “Who made you the arbiter of racism?” and “What purpose on God’s Green Earth does it serve anyone, in any context, to chime in unprompted that you choose to sort racism into real and fake piles? Like what do you get out of that?”
Anyway I explained that it’s fake racism because it’s just somebody that wants attention, and he said “racists can want attention too” and “seems like you’re just doing gymnastics to invent excuses for people online that you don’t even know, why are you doing that”, so I don’t know what to tell him. I don’t think we’ll see eye to eye on this because he incorrectly defines racism as a “real phenomenon” that “affects real people” and is “perpetuated by people’s actions”, whereas I know that what he’s describing is fake racism, because real racism is a little thing people feel in their hearts.
Seems like anybody could plainly see that fake racism is when people say or do real racist things in the world and real racism is intangible, not really strictly “real”, but the guy’s a kook so ¯\_(ツ)_/¯