The easiest for someone here to see is probably code generation. You can point at parts of it and go "this part is from a high-school level tutorial", "this looks like it was grabbed from college assignments", and "this is following 'clean code' rules in silly places" (like assuming a vector might need to be N-dimensional, instead of just 3D).
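To make the "silly places" point concrete, here's a hypothetical sketch of the contrast: the kind of over-generalized, tutorial-flavored class AI tends to emit versus what the problem actually called for. Both snippets are illustrative, not taken from any real generation.

```python
# Tutorial-style over-generalization: an N-dimensional Vector class,
# when the surrounding problem only ever deals with 3D points.
class Vector:
    def __init__(self, *components):
        self.components = list(components)

    def __add__(self, other):
        return Vector(*(a + b for a, b in zip(self.components, other.components)))


# What the problem actually needed: plain 3-tuples and one function.
def add3(a, b):
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])
```

The first version isn't wrong, it's just generality the task never asked for, which is exactly the tell being described.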
If they could make it elsewhere, they would.
I don’t expect this to be a popular take here, and most replies will be NAXALT fallacies, but in aggregate it’s the truth. Sorry, your retired CEO physics teacher who you loved was not a representative sample.
Hey, he was Microsoft’s patent attorney who retired to teach calculus!
Anyone who's been around AI generated content for more than five minutes can tell you what's legitimate and what isn't.
For example this: https://www.maersk.com/logistics-explained/transportation-an... is obviously an AI article.
to some degree of accuracy.
We don't do calculations: computers do them for us.
We don't accumulate knowledge: we trust Google to give us the information when needed.
Everything in a small package everyone can wear all day long. We're at the second step of transhumanism.
https://www.swissinfo.ch/eng/society/swiss-salaries-teachers...
A. The Pathfinder and The Deerslayer stand at the head of Cooper's novels as artistic creations. There are others of his works which contain parts as perfect as are to be found in these, and scenes even more thrilling. Not one can be compared with either of them as a finished whole. The defects in both of these tales are comparatively slight. They were pure works of art.
B. The Pathfinder and The Deerslayer stand at the head of Cooper's novels as artistic creations. There are others of his works which contain parts as perfect as are to be found in these, and scenes even more thrilling. Not one can be compared with either of them as a finished whole. The defects in both of these tales are comparatively slight. They were pure works of art.
I mean I'm skeptical about AI as well and don't like it, but I can see it becoming a force multiplier itself.
I got an AI answer saying ‘no’, but actually you do.
If I use a calculator it will be correct. If I open an encyclopaedia it will mostly be correct, because someone with a brain did at least 5 minutes of thinking.
We are not talking about some minor detail: AI makes colossal errors with great confidence and conviction.
Posters here love to bring out this argument, but I think a major weakness is that those people wound up being right. People don't memorize things any more! I don't think it's fair to hold out as an example of fears which didn't come to pass, as they very much did come to pass.
I would question the utility of engaging.
GPS is great at knowing where you are, but directions are much much harder, and the extra difficulty is why the first version of Apple Maps was widely ridiculed.
Even now, I find it's a mistake to just assume Google Maps can direct me around Berlin public transport better than my own local knowledge — sometimes it can, sometimes it can't.
(But yes, a single original Pi Zero beats all humans combined at arithmetic even if all of us were at the level of the world record holder).
Moreover, women never needed to start out as teachers to "be ready for childcare". The childcare expectations were much lower at the time, but the amount of chores at home was massively higher.
To be honest, we do for most things: I have not checked the speed of light. And I surely would not be able to implement a way to measure it from only my observations and experience.
"and then a bunch" is somewhat misleading. They in fact take easier and fewer classes in the subjects they are studying for, but they have to take extra classes on education, which afaik are not that hard to pass. Getting a "Lehramt" degree is much easier than getting the regular degree in a subject, which is why many people who are simply not good enough for the real thing do it.
Also we have a teacher shortage, and more and more teachers are not in fact people who received the education you usually have to get as a teacher, but just regular people with either a degree in the subject they are teaching or a degree in almost anything (depending on how desperate the schools are and what subjects they are hiring for).
https://www.gutenberg.org/files/3172/3172-h/3172-h.htm#:~:te....
I think it is, however the dream among educators of an “AI detector” is so strong that they’re willing to believe “these guys are the ones that cracked the problem” over and over, when it’s not entirely true. They try it out themselves with some simple attempts and find that it mostly works and conclude the company’s claims are true. The problem though is that their tests are all trying to pass off AI-generated work as human-generated—not the other way around. Since these tools have a non-zero false positive rate, there will always exist some poor kid who slaved away on a 20-page term paper for weeks that gets popped for using AI. That kid has no recourse, no appeals—the school spent a lot of money on the AI detector, and you better believe that it’s right.
This is what will eventually happen. Some component or provider deep in the stack will provide some answer and organizations will be sufficiently shrouded from hard decisions and be able to easily point to "the system."
This happens all the time in the US. Addresses are changed randomly because some address verification system feedback was accepted without account owner approval -- call customer service and they say "the system said that your address isn't right", as if the system knows where I've been living for the past 5 years better than me, better than the DMV, better than the deed on my house. If the error rate is low enough, people just accept it in the US.
Then, it gets worse. Perhaps the error rate isn't low, just high for a sub-group. Then you get to see how you rank in society. Ask brown people in 2003-2006 how fun it was to fly. If you have the wrong last name and zip code combo in NYC, suddenly you aren't allowed to rent Citi Bikes despite it operating on public land.
The same will happen with this, unless there is some massive ACLU lawsuit which exposes it, and the damage will continue until there is a resolution. Quite possibly subtle features of language style will get used as triggers, probably unknowingly. People in the "in-group" who aren't exposed will claim it is a fair system, while others will be forced to defend themselves and bear the burden of proof against a black box.
After I moved here and learned the system, I realised that on my first trip it had directed me through a series of unnecessary train routes instead of a 5-minute walk.
Last summer, when trying to find a specific named cafe a friend was at, Google Maps tried to have me walk 5 minutes to the train station behind me to catch the train to the stop in front of me to walk back to… the other side of the street because I hadn't recognised the sign.
It's a great tool, fantastic even, but it still doesn't beat local knowledge. And very occasionally, invisibly unless you hit the edge, the map isn't correctly joined at the nodes and you can spot the mistake even as a first time visitor.
I suspect there is a product opportunity here. It could be as simple as a Chrome extension that records your sessions in Google Docs and generates a timelapse of your writing process. That's the kind of thing that's hard to fake and could convince an accuser that you really did write the essay. At the very least it could be useful insurance in case you're accused.
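A minimal sketch of the data model such an extension might keep: periodic timestamped snapshots of the document, from which a timelapse of the writing process can be replayed. The class and method names here are illustrative assumptions, not any real extension API.

```python
import time


class WritingLog:
    """Hypothetical log of a writing session: timestamped full-text snapshots."""

    def __init__(self):
        self.snapshots = []  # list of (timestamp, document text)

    def record(self, text, ts=None):
        # In a real extension this would fire on a timer or on edit events.
        self.snapshots.append((ts if ts is not None else time.time(), text))

    def timelapse(self):
        # Yield (timestamp, characters added since previous snapshot, text).
        prev_len = 0
        for ts, text in self.snapshots:
            yield ts, len(text) - prev_len, text
            prev_len = len(text)


log = WritingLog()
log.record("The", ts=0)
log.record("The quick", ts=60)
log.record("The quick brown fox", ts=120)
deltas = [added for _, added, _ in log.timelapse()]  # [3, 6, 10]
```

A pasted-in AI essay would show up as one huge delta in a single snapshot, whereas genuine writing produces a long tail of small, irregular increments, which is what makes the record hard to fake.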
Because if you're putting forth the assertion "If they could make it elsewhere, they would," you've certainly spent some time teaching, yes?
I think it would be good to understand how much experience teaching it took for you to come to that conclusion.