ChatGPT provides false information about people, and OpenAI can't correct it (noyb.eu)
62 points by skilled 16 days ago | 87 comments



Every single time...

https://hachyderm.io/@inthehands/112006855076082650

> You might be surprised to learn that I actually think LLMs have the potential to be not only fun but genuinely useful. “Show me some bullshit that would be typical in this context” can be a genuinely helpful question to have answered, in code and in natural language — for brainstorming, for seeing common conventions in an unfamiliar context, for having something crappy to react to.

> Alas, that does not remotely resemble how people are pitching this technology.


Nobody put what I think about AI better than this.

Yes, there are good cases where "realistic-sounding bullshit" is useful! I remember in the early days, before everyone (well, at least me) grew kind of tired of GPT, people would go "hehe, write me a PC manual in the style of slam poetry" or whatever, and that is fun.

However, people use it to get facts!

Just a few days ago, I read somewhere that hospitals are planning to use LLMs to replace nurses. That is horrifying. Absolutely horrifying.


People use it to get facts, and trust it more than Google. I have a tech-illiterate boss who asked me to do stuff a certain way because "ChatGPT said so", instead of trusting me, an experienced professional. It wasn't like this with a Google search, so why now? Natural language has a big impact on how the product is perceived.


We've seen numerous stories at this point about lawyers trusting AI to generate case documents that turned out to have false citations. AI-generated scientific papers are being published. Doctors are using AI. Law enforcement is using AI. Everyone is using it, and a lot of people are using it with the assumption that it's intelligent and factual. That it works like the computer from Star Trek. People on this very forum who should know better have said they trust AI more than they trust themselves and more than other people.

AI probably has a niche where it's useful, but because it smells like a magic money machine that will allow managers to replace employees and create value from essentially nothing, modern capitalism dictates we must optimize our entire economy around it, no holds barred, damn the torpedoes and full speed ahead because "money." I just hope the fever breaks before people start getting killed.


Note that there’s a big difference between AI and LLMs. There are plenty of reliable techniques in the AI toolbox (for example, ones that provide confidence estimates). It’s just that LLMs aren’t one of them.


It turns out that in a lot of (low-skilled) knowledge work, the nonsense that AI spits out is superior to what humans produce.


We all knew a guy who would never stop talking. Maybe in college.

Some people thought he must be very smart, and would listen to that guy for hours on end. Most people eventually got annoyed / bored and left. But he kept going. The thing was, he knew a lot of stuff. But of course, an audience is a hell of a drug, he had to keep going, and some of the stuff he said ended up being bullshit.

Now LLMs are that guy. We have automated "Cory from the 3rd floor, after a few beers". You wouldn't cite Cory from the 3rd floor on your term paper, why would you cite an LLM?


People have a fundamental misunderstanding of what LLMs are.

I’ve had to explain this so many times even to engineers. People keep using it as Google. It is not a mechanism to retrieve facts.

It’s rather a reasoning mechanism that, when used like a search engine, generates text that looks like the output of a retrieval of facts.

The article talks about OpenAI being unwilling to correct errors. But they just can’t. There just aren’t facts like birthdays for specific people discernible from the weights. Maybe for some.

So what would they correct? The best that can be done is, as the article says, to apply filtering and refuse to answer.


I don't know; it seems we've recently detached what things are from what the people behind them say they are. Bing AI is an LLM, Phind is an LLM, and they market themselves as search engines. To look at the details: you say

>> It is not a mechanism to retrieve facts

Then bing AI powered by chatgpt shows on its site

>> [Hello, this is Bing! I’m the new AI-powered chat mode of Microsoft Bing that] can help you quickly get information.

If "get" is a synonym of "retrieve" and "information" is a synonym of "facts", I'd argue that maybe it's not people misunderstanding what LLMs are, but someone explaining them wrong?

I guess in the world of marketing words don't carry any value anymore; in that case, I agree with you.


Bing is not just an LLM. It’s RAG; the LLM is just a layer on top of a typical search engine. The LLM is not “getting” any facts, it’s just synthesizing them in a human-readable format.
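
To make the distinction concrete, here is a minimal sketch of the RAG pattern. This is not Bing's actual pipeline; search_index() and call_llm() are hypothetical placeholders.

    # Minimal RAG sketch, not Bing's real pipeline.
    # search_index() and call_llm() are hypothetical placeholders.

    def search_index(query):
        # Stand-in for a conventional search engine / vector index.
        documents = {
            "example person": "Example Person worked at ExampleCorp (snapshot from 2022).",
        }
        return [text for key, text in documents.items() if key in query.lower()]

    def call_llm(prompt):
        # Placeholder for a call to some LLM provider's API.
        raise NotImplementedError("plug in a model here")

    def answer(query):
        sources = search_index(query)            # the facts come from the index...
        prompt = (
            "Answer using only the sources below.\n"
            "Sources:\n" + "\n".join(sources) +
            "\nQuestion: " + query
        )
        return call_llm(prompt)                  # ...the LLM only rephrases them

Which is also why, if the index returns a stale snapshot, the LLM will confidently restate the stale snapshot.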


I can see how people are confused by this though. Bing presents things it finds as facts, without really making it clear that they're just a synthesis of things it found on websites, limited by its search algorithm.

So if I ask Bing about me, it says "Rory McCune is a Cloud Native Security Advocate at Aqua Security." without any reference.

The problem is, that's not correct; that's a job I had two years ago. But someone reading it could be forgiven for thinking it's a fact, given how it was presented.

In this case that's harmless, but I could easily see cases where it would not be harmless.


LLMs are also often used in the search component of RAG, by generating embeddings that are then indexed and searched.

(I don’t know if that’s how Bing AI works)
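
Roughly, the retrieval side looks something like the following toy sketch, where the "embeddings" are hand-made 3-dimensional vectors rather than the output of a real embedding model:

    import numpy as np

    # Toy embedding search. In a real system embed() would call an embedding
    # model; here the vectors are hand-made 3-d toys (dimensions loosely
    # meaning "people", "dates", "places") so the ranking logic is visible.
    docs = {
        "Macron was born in December 1977": np.array([0.9, 0.8, 0.1]),
        "Paris is the capital of France":   np.array([0.1, 0.0, 0.9]),
    }

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def search(query_vec, k=1):
        ranked = sorted(docs, key=lambda d: cosine(docs[d], query_vec), reverse=True)
        return ranked[:k]

    # A query like "When was Macron born?" would embed near people+dates:
    print(search(np.array([0.8, 0.9, 0.0])))   # -> the birthday document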


How is LLM+RAG any different from a fine-tuned LLM without RAG? They're both trained on data, and have the capability to hallucinate.


I believe what the parent comment is trying to imply is that the search results are fetched/retrieved from Bing's own internal ranking/vector(?) database and then passed to the LLM, which converts the received documents into a more human-readable format and fills in any gaps in the information with its own data.

So the gaps are the only areas where the LLM can hallucinate, and if your search query is about easily available information on the internet, then hallucinations will be few or none.

Edit: I have used RAG in a project that I am working on, and it's quite hard to ascertain whether the LLM used the information provided as part of the RAG documents or just made up information on its own, since even without RAG we were getting similar responses 7 times out of 10.


Theoretically some people think RAG sounds more feasible to make factually accurate.

After all, if you've trained an LLM on masses of unchecked data you've scraped from the internet, your training data probably includes "Joe Biden is the president" and "Donald Trump is the president" and "Barack Obama is the president" and "Emmanuel Macron est le président" and so on. It would be understandable if an LLM was confused about who the president was.

These people think handing an LLM the contents of https://en.wikipedia.org/wiki/President_of_the_United_States then asking who the president is sounds a lot more feasible.

Personally I'm not so sure - I've never seen a RAG implementation that impressed me.


The point is that it should be able to point you to the source article. The synthesis isn’t necessarily more accurate, but you can check its work easily.


> So what would they correct? The best that can be done is like it is said in the article, apply filtering to refuse to answer.

Stop providing service in the region in which your product is unable to comply with local regulations?

You can't extract millions of euros from EU citizens with anything physical that goes against EU law, but we like to pretend that because something's digital, it's totally fine if your product intentionally breaks the law, even if it's just for a couple of years until some random NGO sues you and the courts react. I think that's nonsense.

I'm not gonna say the EU needs its own equivalent of the Great Firewall, but there should be some cost of being intentionally non-compliant, as in fines, as in the same thing the EU already does to Facebooks and TikToks of the world.


OpenAI openly and knowingly presents ChatGPT as a way to get facts, and Microsoft even more so.


Now, what I think would be interesting is whether one could use ChatGPT or whatever to distill any kind of document into a series of factoids. Say you give it the Wikipedia article about a famous person and it will extract stuff like "born: 1949-01-30" and associate it with the name of the person.

Later on, a user asks an AI "when was Foo Bar born?", and the AI then looks up the factoid database and responds with the correct factoid, or an error message.
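
A rough sketch of that idea, where the extraction step would be done by an LLM (its output is hard-coded below so the lookup half is concrete; "Foo Bar", the date, and the URL are just placeholder values):

    # Sketch of the "distill into factoids, then look up" idea.
    # extract_factoids() stands in for an LLM pass over a source document;
    # its output is hard-coded here so the lookup half is runnable.

    def extract_factoids(document_text):
        # Hypothetical LLM step: turn prose into (subject, attribute, value)
        # triples, each with a source reference.
        return [("Foo Bar", "born", "1949-01-30", "https://example.org/foo-bar")]

    factoid_db = {}
    for subject, attribute, value, source in extract_factoids("...article text..."):
        factoid_db[(subject.lower(), attribute)] = (value, source)

    def answer(subject, attribute):
        hit = factoid_db.get((subject.lower(), attribute))
        if hit is None:
            return "I don't know."               # an explicit error, not a guess
        value, source = hit
        return f"{value} (source: {source})"

    print(answer("Foo Bar", "born"))             # 1949-01-30 (source: ...)
    print(answer("Foo Bar", "died"))             # I don't know.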


Note that a lot of the information on Wikipedia is available as a structured knowledge graph already: https://wikidata.org/wiki/Wikidata:Main_Page
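
For example, a date of birth can be fetched from Wikidata's public SPARQL endpoint in a few lines; a sketch assuming the requests library is available (Q42 is Douglas Adams, P569 is "date of birth"):

    import requests

    # Ask Wikidata's public SPARQL endpoint for a date of birth.
    # Q42 = Douglas Adams, P569 = "date of birth".
    query = "SELECT ?dob WHERE { wd:Q42 wdt:P569 ?dob . }"
    resp = requests.get(
        "https://query.wikidata.org/sparql",
        params={"query": query, "format": "json"},
        headers={"User-Agent": "factoid-lookup-example/0.1"},
        timeout=30,
    )
    resp.raise_for_status()
    bindings = resp.json()["results"]["bindings"]
    print(bindings[0]["dob"]["value"])   # e.g. "1952-03-11T00:00:00Z"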


For Wikipedia, yes, but I only used that as an example. There's tons of stuff one could build factoids from.


> It’s rather a reasoning mechanism

I wouldn’t call it reasoning. To me that implies using logic and being objective. It’s more wisdom/stupidity of the crowds, with the model designer deciding what crowd to use to create the model and then tweaking things to make the model look like it’s reasoning.


>People have a fundamental misunderstanding of what LLMs are.

It's labeled "AI", aka "artificial intelligence".

So clearly, it's intelligent enough to tell fact from fiction, right? It's intelligent enough to read and repeat facts, right? It's artificial intelligence, isn't it?

I'm playing Devil's Advocate here, obviously. I'm aware of how this actually works, like you are, but the way it's billed, you can't expect the commons to understand any of this in any other way. There is a very specific concept of what AI is among the masses, regardless of whether "AI" is it or not.

If you call something a duck, you can't blame someone for saying it's a duck.


Even real intelligence struggles to tell facts from fiction, otherwise the world would look very different. That's yet another problem with AI, people often aren't sure what they can expect but many err on the side of "magical oracle".


> The article talks about OpenAI being unwilling to correct errors. But they just can’t.

You surely understand how this isn't a compelling argument at all, right?

If you have a bug in your X-ray software and it irradiates patients, saying you don't have the knowledge or resources to fix it doesn't suddenly become a way out of fixing it.

This will be a slam dunk of a case. There are many obvious paths that OpenAI can follow, such as stopping business in the EU or preventing it from giving any fact about any person.


The organic neural network inside my skull isn't a mechanism to retrieve facts either, nor are specific outputs that it may provide discernible from the weights. And yet, if it responds with a factual error to a given prompt, it can be trivially corrected so as to provide the right answer for future prompts.


Bold of you to assume 'Open' AI is accountable to anything other than printing more money f-for h-humanity of course.


Is Gemini that unpopular? It seems that nobody in this thread is aware of its features.

The best that can be done is just what Gemini does: a button to (smartly) compare the generated text against google search results.


> The article talks about OpenAI being unwilling to correct errors. But they just can’t.

There are actually several algorithms intended to allow fact editing in LLMs: https://github.com/zjunlp/EasyEdit?tab=readme-ov-file#curren...

They don't work perfectly (e.g. "Tim Cook is CEO of Apple" and "The CEO of Apple is Tim Cook" for some reason have to be edited separately) but they can deal with the most egregious cases. And, you know, maybe OpenAI can improve them further, given they're always going on about 'safety' as a reason their competitors should be regulated out of existence.


I wonder if my understanding of these is wrong, but from what I can tell, the GPTs and such are basically all hallucinations; it's just that most of those hallucinations are also correlated to reality, to what actually exists.


I find explaining errors and factual mistakes in LLMs, or AI in general, as hallucinations to be counter-productive. I don't know if the field has landed on a specific definition of what a hallucination is, but I mostly associate it with garbled output from tokens like 'SolidGoldMagikarp'[0] and 'davidjl'[1].

My understanding is that AI models like GPT have been able to convincingly convey the form of language, but not its meaning. It looks and sounds like how a human would communicate, but the AI is unable to imbue meaning into the words and sentences it produces. My knowledge of this comes from Lex Fridman's episode with Edward Gibson [2].

I think that's the fundamental issue with LLMs at the moment. It has managed to mostly exit the uncanny valley because, as far as most people are concerned, the text produced could just as well have been made by a human. I think this lends some credibility to the text produced by the LLM, because more or less all text ever produced has had a human behind it. This is no longer the case and as such there is bound to be a transitional period where we learn how to deal with this new technology.

[0]: https://www.lesswrong.com/posts/aPeJE8bSo6rAFoLqg/solidgoldm...

[1]: https://twitter.com/goodside/status/1666598580319035392

[2]: https://youtube.com/watch?v=F3Jd9GI6XqE&t=230


Expressing it that way removes a useful word from our vocabulary; when people say "hallucinations" in this context it specifically means that the generated response does not match reality. If you say that it's just a coincidence when it's correct, and that it's hallucinations all the way down, how do you communicate succinctly to a human that AI responses are not necessarily grounded in reality? (I mean in the specific case, not the general case.)


Well, I might be taking a pessimistic view on it, but if you expressed it this way to the average user, they might pay more attention to what's actually there when an answer returns to them. I think they have in mind that the text is (generally) grounded in reality like how a human mind grounds things, and this is definitely a hurdle in understanding how these LLM machines work.


You just described the human mind. What we see in our heads is wholly imagined, mostly correlated to reality. You see things clearly, without the huge empty spot in the middle or the fuzzy black-and-white periphery. And usually without hallucinations, where our predictive mind miscalculates and adds things to our imagined interior world that don't exist in reality.


The 'hallucination' term is an unfortunate bit of anthropomorphisation. It's a machine for generating plausible text; sometimes the text is true, accidentally. This bears no particular resemblance to any conventional meaning of 'hallucination'.


That is a matter of perspective. Generally hallucinations are considered to be the output that is not correlated to reality. You are correct that the same method is used for all output. If that were not the case then removing hallucinations would be trivial.


It's whatever is in the training data, plus some hacks. For example, if I ask ChatGPT when my birthday is it says "I don't have personal information on individuals"; but if I ask when King Charles III's birthday is it tells me a date... which matches what's in Wikipedia... so it might be right. If there are lots of instances in the training data where the date is wrong, then it will just repeat that wrong date back to you as fact.


It might be able to produce true data about very famous individuals, and it might refuse to provide information about unknown individuals. But if you ask about a YouTuber, or a smaller star/celebrity, it is more likely to produce a false statement. I asked about the birthday of Tom Scott (the YouTuber) three times and got three different dates (and none supported by a Google search).


Yes, I agree. That's what I was trying to say. It also "doubles down" when called out: I asked about the mathematician Richard Taylor and it responded with a paragraph that called him Sir Richard Lawrence Taylor, so I asked when he was knighted (he wasn't) and it said "The 2014 birthday honours list" which is wrong... probably because there was a Dr Richard Thomas Taylor awarded an MBE (not a knighthood) in that list.


ChatGPT will confidently fabricate data out of thin air, it is less likely to admit it does not know something or is not sure. And that’s the primary problem with it: the statement is believable, but incorrect.


Glad that people are seeing through the ChatGPT/AI hype that is driving all companies nuts by creating unnecessary peer pressure to integrate some kind of GenAI into their products even when it doesn't make sense.


They've always been around, they've just been dismissed as "Luddites" who fear an inevitable future. But of course, the Luddites are always proven correct in hindsight.


When has your claim that "the Luddites are always proven correct in hindsight" been true?


Always. And every now and then HN realizes it, then promptly forgets.

Contrary to popular propaganda, the original Luddites weren't opposed to technology, they were opposed to the effect of technological progress on the working class. They knew that automation and mass production would be used to devalue labor, flood the market with inferior products, and that all of the benefits and profit from the industrial revolution would go to the corporations, at the expense of their quality of life. And they were correct.

And modern day "Luddites" were correct about the centralization and commoditization of the web, social media, crypto and NFTs, Elon Musk and autonomous vehicles, and will be proven right about AI. Tech has nothing left to offer but grift upon grift.

https://www.newyorker.com/books/page-turner/rethinking-the-l...

https://news.ycombinator.com/item?id=37664682

https://librarianshipwreck.wordpress.com/2018/01/18/why-the-...

https://news.ycombinator.com/item?id=17053994


Didn’t the media cover this already over and over since ChatGPT’s inception? Not that it isn’t important but to imply it is a new revelation seems sensationalist.

> While inaccurate information may be tolerable when a student uses ChatGPT to help him with their homework, it is unacceptable when it comes to information about individuals.

I don’t even understand this part. Why would inaccurate information be at all acceptable for homework? You need accurate information there just as much as with people. In fact there’s probably a good deal of overlap.

This article is bad.


> Didn’t the media cover this already over and over since ChatGPT’s inception?

This is not “the media”. noyb isn’t reporting on what other people did, they are informing us of what they just did.

> Not that it isn’t important but to imply it is a new revelation seems sensationalist.

The article isn’t about the failings of ChatGPT, it’s about a specific legal complaint being made.

> Why would inaccurate information be at all acceptable for homework?

It’s acceptable in a legal sense, you get a bad grade and that’s that. But falsehoods about a particular individual on a popular resource could be quite damaging and are against EU law.

> This article is bad.

Rather, you completely misunderstood it.


I mean, yeah I was having a tough time understanding it. I felt like my comment conveyed my confusion pretty well. I still think the wording is strange and the most clarifying thing you’ve provided is that this guy is a blogger and his target audience is far less broad than HN encapsulates.

Sorry if you took offense to my saying the article was bad. I regret saying that now, it was unnecessary.


> this guy is a blogger and his target audience is far less broad than HN encapsulates.

It’s not one blogger, noyb (stands for “None Of Your Business”) is a non-profit focused on protecting privacy rights in the EU.

https://en.wikipedia.org/wiki/NOYB

> I regret saying that now, it was unnecessary.

Thank you for saying that. Especially on the internet where we all have the compulsion to double down, I believe those types of admissions take guts and should be celebrated and normalised.


IMHO this is not a fixable problem; it's not even a new problem, because false information has existed forever. With the Web it became globally available, with social media it became globally spreadable, and with "AI" it has become of endless supply in form and variety.

Humans need to develop humane defense mechanisms for the new reality; tech cannot be stopped and cannot be hermetically insulated against mistakes or bad actors.


Why is it "not a fixable problem"? So many problems introduced by new technologies have been claimed to be "non fixable" by the businesses promoting them until, lo and behold, the law has forced them to do so.


Was lying online fixed? The attempts to fix it come at great cost, usually suppression of speech, which leads to all kinds of problems associated with lack of information or with feeding people faulty information.

Attempts at fixing hate speech on social media resulted in a similar situation, with hate now becoming mainstream. American social media banned keywords for racism or hate speech, only to push people toward dog-whistle racism and hate speech.

The problem with fixing lies is that it's impossible to objectively decide what is a lie, and this is true for all kinds of statements about people.


Yes, it was.


See, one lie just sneaked in.


You're totally right. We can't just hit the brakes on technology. It's here to stay, with its flaws and perks. As users, we need to be savvy about what we see online, questioning stuff instead of just swallowing it whole. Education is key here. The more we can help people navigate digital sources of information, the better equipped they'll be to identify misinformation.


Why do all of your comments look like ChatGPT wrote them?


Am I wrong, or are LLMs text generators, not fact generators?


"We" ask them to generate text that includes the facts.


Popcorn time: NOYB (Max Schrems) filed a complaint against OpenAI with the Austrian DPA: ChatGPT is not GDPR compliant.


Sounds like a win for the US... get competing economies to block the technology for trivial reasons, then by the time the bugs are worked out they will be so far behind that their only choice will be US-based solutions.


> trivial reasons

Come again?


You're suffering from ChatGPT Derangement Syndrome.


Yes; the US will lead the world in plausible-looking bullshit generation.


ChatGPT is not a magical database that provides correct answers to all questions. It is an LLM that draws conclusions and propositions from the context you offer. Instead of just asking a question and waiting for an answer, you can provide detailed context and engage in a discussion about it.


I like chatgpt a little more after reading this.


We here in Germany, as in other countries, have an issue with Wikipedia and its actions against a certain clientele.

If you get branded a conspiracy theorist, your Wikipedia entry will denounce you as such, the admins and moderators will lock the article, and you can do nothing about it.

Obviously ChatGPT will take information about your persona from Wikipedia if available and will update this information according to the changes in Wikipedia.

And you can’t do anything about it.


That's because it's being used in the wrong way. It isn't a factual database.


It sometimes feels like I've taken crazy pills watching what was effectively a tech demo that went viral become the use case now dictating billions of dollars of development and optimization.

It's a crappy use case. And much better ones are typically being overlooked outside a few smart enterprise integrations.

To put it mildly - if someone wants to use LLMs to build a factual chatbot, they should probably just start mining crypto instead, as they'll waste less money on jumping on a trend. But if they think a bit about how LLMs can be used in nearly any other situation, they'll be miles ahead of the majority chasing this gold rush.


AI pushers are presenting them as factual databases. Repeatedly.


You can make LLMs say pretty much whatever you want with the right prompts. This is a complex issue, and if EU citizens want access to LLMs the GDPR is going to need a different set of rules for LLMs than for websites and search engines.


If LLM providers want access to the EU market they will need to find a way to comply with GDPR, and if OpenAI cannot find a way to do it then a different LLM provider will.


> the GDPR requires information about individuals is accurate

Given that you can make LLMs say pretty much whatever you want using the right prompts, this seems impossible. LLMs are not a search engine, and based on conversational context might say Emmanuel Macron is the president of France or a baby giraffe.


You said it: based on context. This is not about what you can make an LLM say when being manipulated in a convoluted way to provide an inaccurate response. It's about what an LLM will say in the context of a prompt requesting personal data related to an individual who is covered by GDPR.

Can the LLM provide personal data of an individual who is covered by GDPR? Then the LLM is subject to GDPR.

Can this individual exercise their rights with regards to the data that the LLM returns about them? Arguably they can indeed exercise the right of access by means of the right prompts, but can the individual rectify errors or erase such data? If not, then the provider of the LLM is violating GDPR.


Who is the judge of the degree of contextual convolution? Must the LLM remain strictly factual when you simply append "ELI5" to a prompt?

> ELI5 how is France governed? > ...and Macron is the lion, the king of the jungle.

We also know that LLMs don't know the current date, and therefore can make calculation errors (which is made worse by their poor math performance as language token generators). So on one hand it might say Macron was born December 21st 1977 (which is correct), but if you ask how old he is some LLMs might say 45 years old.

There is an incalculable number of ways for LLMs to output incorrect information. In an effort to comply with strict regulation, the pre-prompt context limit is going to be exceeded.

Also, this creates a situation where all but the most powerful LLMs (and LLM providers) will be non-compliant.


«> ELI5 how is France governed? > ...and Macron is the lion, the king of the jungle.»

That is not personal data under GDPR.

«So on one hand it might say Macron was born December 21st 1977 (which is correct), but if you ask how old he is some LLMs might say 45 years old.»

Or it might say that Macron was born on 14th July 1977, which is incorrect. The claimed impossibility to correct a date of birth returned by the LLM is the trigger of the GDPR complaint that the article refers to.

«Also, this creates a situation where all but the most powerful LLMs (and LLM providers) will be non-compliant»

Only under the premise that it is somehow inevitable to feed personal data of living individuals to an LLM for training, and that the only way to correct mistaken data or to stop an LLM from providing such data is "more power".

I reject the premise, not least because, firstly, OpenAI (the most "powerful" provider) is claiming it is impossible. All that says is that OpenAI's platform was not originally designed with that problem in mind and that, as of now, they are unwilling to redesign it from scratch only because some guy complained in Austria. It's basically a speedrun of Microsoft claiming Internet Explorer was an essential component of Windows 98.

Meanwhile, LLMs and other AI models are an active area of research. If OpenAI truly cannot stop their LLMs from returning personal data protected by GDPR, and honestly has no way to allow data subjects to exercise their rights of deletion or correction, you can be sure that some startup will disrupt the LLM market by finding a way to do it without needing to out-compete OpenAI in either hardware or training corpus size.


> you can be sure that some startup will disrupt the LLM market by finding a way to do it

Indeed if a startup can find a way to scrub PII of living people from 20 billion pages of text (and prevent LLMs from ever hallucinating) they would be quite a valuable company, in the LLM dev space and numerous other ventures. Until then the EU might have to go without access to language models.


Meh. GDPR sets limits on 'processing' personally identifiable information. In the context of an LLM, its outputs may contain PII if its inputs do. Those inputs are training input and prompts. So long(!) as the training input doesn't have PII, the output will only have it if the prompts do. Same as if you save a file on onedrive, if you save PII there, you're the data controller, and Microsoft is a processor on your behalf.


It is impossible to remove personal data ("any information which are related to an identified or identifiable natural person") from the LLM training data.

As far as I understand it, ChatGPT and all other similar systems are blatantly violating the GDPR; they would have to, for example, publish their related training data to conform.

I guess the EU authorities don't do anything for now because they don't want to admit that their funny law basically bans all state-of-the-art AI.

(OK, OpenAI also broke the law in almost all countries by downloading shadow libraries, but here they at least have more plausible deniability.)


Does the GDPR require information that is not asserted to be factual to actually be factual?

If I have a random number generator producing arbitrary strings, am I required to ensure that the strings do not contain untrue statements about individuals?


When people treat the information as factual and the company doesn't do enough to clarify that, then yes.


So you would agree that rather than this being a case of a definite violation of GDPR rules this would entirely hinge on what would constitute enough?

The fact that LLMs hallucinate is certainly no secret, even the linked article says OpenAI openly admits that they can't avoid it right now.

What would constitute enough?

Would it be enough to place a statement onscreen at the start of every conversation saying that the information may not be accurate and that, if the information is significant, it should be independently verified?


This seems so backwards to me. As a user of LLMs, it's clear to me that tokens generated by ChatGPT and similar are not to be interpreted as a legally-valid statement by the company making the LLMs, unless explicitly stated by said company. I certainly don't engage with them that way. I believe this to be fairly obvious to anyone who has used such tools, so I just see this as an opportunity to sue for a quick buck.


NOYB doesn't make money out of suing. I otherwise agree with the rest of your comment.


I just looked into what noyb is, thank you for pointing this out. Perhaps this is something that had to be brought to court at one point or another, so we can set a precedent one way or the other, then. At the moment I think I'm hoping nothing comes out of this.


Yeah, me too, and I have a recurring donation to NOYB, as they generally do fantastic work. I think this one is a rare miss.


> I believe this to be fairly obvious to anyone who has used such tools

It is not. Not even people in tech understand this, let alone non-technical people. These tools are being marketed as a way to get factual information. Don’t let your knowledge of the technology blind you to the fact that people outside your circle don’t know what you do.


Are the statements by the neighbourhood drunk in the corner pub commonly interpreted as legally valid statements by the individual making them? Nobody takes the drunk seriously, yet the neighbourhood drunk shouldn't be treated more seriously with respect to slander than a large corporation with respect to libel.


<sarcasm>noyb.eu provides false information about ChatGPT, will noyb.eu correct it?</sarcasm>



