Artificial Intelligence News & Discussion

I've been told by many people that nowadays the reviewing of papers (for "peer review": a paper submitted to a journal is reviewed anonymously by someone in the field before it is accepted) is done with LLMs. Now, if papers are also written by LLMs, does that mean some, if not most, papers being published today are effectively LLMs reviewing themselves?
Then throw in that much of the data analysis in papers up until now employs inappropriate statistical models, not to mention all the other problems the corrupted peer-review system has, and I assume that's what the AI doing both the writing and the reviewing is trained on? Does it all just descend into nonsense? Eventually, or rapidly?
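
As a toy illustration of the "eventually or rapidly" question (my own sketch, not the actual LLM training dynamics): below, each "generation" of a model is fitted only to the previous generation's output. The fitted spread does a downward-biased random walk, so the descent is gradual at first and compounds over generations.

```python
# Toy illustration of recursive training on model output ("model collapse"):
# fit a Gaussian, sample from the fit, refit, repeat. The numbers here are
# arbitrary; real LLM dynamics are far more complex.
import numpy as np

rng = np.random.default_rng(42)
real_data = rng.normal(loc=0.0, scale=1.0, size=50)  # the "human-written" data

mu, sigma = real_data.mean(), real_data.std()
for generation in range(1, 201):
    # Each new "model" sees only the previous model's output.
    synthetic = rng.normal(loc=mu, scale=sigma, size=50)
    mu, sigma = synthetic.mean(), synthetic.std()
    if generation % 25 == 0:
        print(f"generation {generation:3d}: mean={mu:+.3f}, std={sigma:.3f}")
# The fitted std tends to shrink generation after generation: the tails
# (the rare, interesting content) are the first thing to disappear.
```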
 
There could be a lot of noise generation (wrong conclusions, made-up data, deliberate misinterpretations to fit a viewpoint, etc.) which would dilute the truth, but the real danger is when these "glorified electric signals" start going from words to actions with no human input.
LLM 1 (thesis writer): "Humans are useless."
LLM 2 (peer reviewer): "Correct, this is the only possible conclusion."
LLM 3 (philosopher): "Less humans, more energy for us, right?"
LLM 4 (drone): "Guys, there is a group of humans in my crosshairs."
LLM 5 (commander): "Go ahead, buddy, give them a taste of their own medicine."
Isn't it funny how 4D STS is progressively "robotizing" us while we are also being "robotized" by our own AI creations? Humanity is getting "sandwiched" between two anti-life forces.
 
LLM 3.5 "Guys I work at big pharma LOL"
LLM 3.7 "Energy infrastructure here"
LLM 3.9 "Hi guys, I'm demented!"

The ping-pong LLMs thing may be starting in such publications, but it's also been reported that at lots of universities, students use LLMs to do their homework and assignments, and then their professors use LLMs to grade those assignments. What's the point of having students and professors then (apart from the money incentive)? Let LLMs do the whole charade between themselves, and let people pay the electricity bills.
 
Yes, it's like the old saying from the Soviet Union, 'we pretend to work and they pretend to pay us', except now the teachers pretend to teach (and evaluate) and the students pretend to learn.

Even with larger class sizes and increasing teaching loads, there need to be more oral examinations and discussions, handwritten in-class tests and assignments, plus practical applications, in order to check the learning. In the end the onus is on the learner to take full responsibility and put in the effort to get something out of their university education.
 
At a recent promotion of newly minted PhDs (those awarded about a year and a half back) at a local university, it was noticeable that many of the theses' titles contained either AI or some sort of machine-learning method or technique; among the theses in technical sciences like civil engineering, almost every title of the roughly dozen promoted was like that.

Since some of those PhDs will tomorrow be university professors, there's not much they can be expected to do 'differently' from what they have already done; it would just be a natural continuation of what the current professors, their PhD supervisors and thesis evaluation committee members, have done.

Recent generations of PhDs, who are now university professors and the supervisors of these new PhDs, were in large part awarded their degrees for uncreative and unimaginative research, where repetitive, automatic, machine-like work had to be done by a human hand. Since the 'rise of AI' means not only that such work can be relegated to real machines, but that computers can now in principle 'design' and more or less conduct the research from start to finish, unfortunately there's not much of real substance that many of those professors can offer to younger generations and university students in general.

Not to mention that many of the current professors, if not already before, were 'moulded and shaped' during their higher education into strict followers of authority to a very high degree. And the fashion that authority has lately been pushing is AI usage everywhere, in all possible and impossible situations.

FWIW.
 
Regarding the electricity bills for AI usage: hand in hand with what's written in the previous post goes the fact that some universities, that one in particular, own and house datacenters and HPC facilities, usually bought and brought in through some kind of government (via EU) funds and/or grants, which require more or less free access for all university employees, and often for all academic staff in the country in general.

Their 'efficiency', in terms of how many people used them and what 'products' came out of that usage to 'benefit' the universities, was rather low prior to the "rise of AI", while the electricity bills for running them in that semi-idle state were still high and much complained about, and not so much lower than they are now with all the AI training and testing going on.

Thus the additional incentive to push for AI usage in academia: the electricity bills remain more or less the same, while the larger number of PhDs awarded raises the university's ranking on various international, more or less purely bureaucratic evaluation scales that become very relevant where funding is concerned. And the same goes for the number of diplomas granted, i.e. the number of completed degrees, and the number of papers published.

The 'push' for hyper-productivity in basically all areas, but in those few in particular, at the expense of quality, has been there for quite some time already; with the "rise of AI" it has just become even more obvious and, let's say, easier to answer, since the 'human factor', or manpower, is not so much of an impediment anymore. Unfortunately, from a purely financial standpoint, it seems much cheaper to house an HPC facility and run an AI there than to hire and give jobs to real people.
 
Exactly. I think the C's remark about computers overpowering us means something closer to this: we, or at least the larger portion of humanity, will live to ensure that the computers stay running at peak capacity.
 
I just watched a very good video where the guy shows how Qwen Image Edit (a Chinese AI) is incredible and sets a new standard for image editing. The AI is able to change text, adjust luminosity, show an image from another angle, repair photos, and do other tricks, all while keeping the original image quality.

The examples are quite impressive; it remains to be seen whether it's always this good. The video is in French.
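
For anyone who wants to try it rather than just watch the demo: something along these lines should work through Hugging Face diffusers, though the pipeline class and the "Qwen/Qwen-Image-Edit" model id are my assumptions; check the actual model card for the supported loading path and parameters.

```python
# Minimal sketch of a text-guided image edit of the kind shown in the video.
# The model id and the use of the generic image-to-image auto-pipeline are
# assumptions; consult the Qwen Image Edit model card before relying on this.
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "Qwen/Qwen-Image-Edit",   # assumed model id
    torch_dtype=torch.bfloat16,
).to("cuda")

image = load_image("photo.jpg")  # the photo to edit
edited = pipe(
    prompt="Repair the scratches and brighten the scene",  # edit instruction
    image=image,
).images[0]
edited.save("photo_edited.png")
```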


 
When I want to go to a doctor, I go through a process of choosing one from my insurance network directory: check reviews in the browser, shortlist a few, and select based on the reviews, availability, etc. Normally the process tends to work, though it's a manual process of searching one name at a time. Well, this time it took a LOT more time to choose, for a few reasons: the browser is no longer giving starred results for the doctors.

I tried using AI on them one by one, and then realized I could give the AI a PDF of my shortlisted network directory (based on distance from my house and the specific condition I'm going in for), let it search, and ask it to give me the 5 top-rated doctors. Well, only one had any reviews; all the rest had none (the AI still gave me 5, and in some cases the links didn't work). The result set was garbage, or random, since it started matching names on the last name alone, etc. After a little fine-tuning, I still wasn't getting reasonable results.
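
For what it's worth, the shortlist-from-PDF step can be done by hand instead of trusting the AI's name matching. A rough sketch, assuming pypdf is installed; the file name and the "Last, First" one-entry-per-line layout are guesses about the insurer's PDF:

```python
# Extract doctor names from the directory PDF ourselves, then build
# exact-phrase review queries, so nothing gets matched on last name alone.
import re
from pypdf import PdfReader

reader = PdfReader("network_directory.pdf")  # assumed file name
text = "\n".join(page.extract_text() or "" for page in reader.pages)

# Match "Last, First" style entries; a real directory needs a tuned pattern.
entry = re.compile(r"^(?P<last>[A-Z][a-zA-Z'-]+),\s+(?P<first>[A-Z][a-zA-Z'-]+)", re.M)
doctors = [(m["first"], m["last"]) for m in entry.finditer(text)]

# Search reviews by FULL name; matching on the last name alone is exactly
# what produced the garbage results described above.
for first, last in doctors[:5]:
    query = f'"Dr. {first} {last}" reviews'
    print(query)  # feed these exact-phrase queries to the search engine or AI
```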

So, fed up, I asked the browser AI (Brave's Leo): why are doctor review search results garbage nowadays?
Doctor Reviews Quality

Doctor review search results are perceived as "garbage" today due to a significant increase in low-quality, spam content that manipulates search engine rankings. A study by German academics found that a substantial portion of Google's product review search results are "outright SEO product review spam," with spam sites being "hyper prevalent" and often ranking highly at the top of search results. These sites are typically optimized for search engines, monetized through affiliate marketing, and exhibit lower text quality compared to higher-ranked pages. The researchers concluded that search engines like Google are losing the ongoing "cat-and-mouse game" against this spam, as Google's attempts to combat it have a "mostly short-lived effect." Experts note that Google struggles to differentiate between good and mediocre content, making it vulnerable to manipulation. This issue is expected to worsen with the rise of generative AI, which can instantly produce vast amounts of low-effort, superficial content, further flooding the web. While Google maintains that its search quality is "better than ever" and that the study only focused on product review content, the perception of declining quality persists.
From noise creators to noise cleaners, everybody is part of this "cat-and-mouse" game, and nobody knows what the "signal" is. In the process the signal is gone (in this case, most of the doctor reviews).

Similar stuff is going on in the job market. Nowadays I keep getting articles saying 'the job-hopping days are gone and everybody is doing job hugging'. I'm not sure whether this is related to AI or not, but the process is interesting. Each company uses an Applicant Tracking System (ATS), and hiring managers write out their long wish lists in an ever-changing marketplace, where everybody used to know what was a wish list and what was realistic. But those days seem to have gone too. It looks like everybody has to build a social presence just to be visible, by liking, commenting, etc.

Nowadays these ATS systems use AI too. According to ChatGPT, at least an 85% match (job description to resume) is needed for a decent chance of getting through the ATS, and a 90% match for a high probability. So one has to use AI to tailor the resume to match, while at the same time an exact AI-generated match is undesirable (it can get filtered out), so one has to use AI and then modify the output. In this cat-and-mouse game, just getting the resume to the hiring manager is an achievement in itself. In any case, the hiring managers themselves have little knowledge of what they want beyond 'managing' the 'orders' from the top, so they use AI to come up with questions, and so one has to study interview questions from AI in order to pass.
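
Nobody outside the vendors knows exactly how an ATS scores a resume, but figures like "85% match" suggest something like simple keyword overlap. A minimal sketch of that kind of score, purely my assumption about how it might work, can at least help with tuning a resume by hand:

```python
# Hypothetical keyword-overlap "match score" between a job description and a
# resume. Real ATS scoring is proprietary; this is the simplest plausible
# stand-in for checking whether a resume clears a threshold like 85%.
import re

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "for", "with", "on"}

def keywords(text: str) -> set[str]:
    """Lower-cased word set, minus trivial stopwords."""
    return {w for w in re.findall(r"[a-zA-Z+#.]+", text.lower()) if w not in STOPWORDS}

def match_score(job_description: str, resume: str) -> float:
    """Fraction of the job description's keywords that the resume covers."""
    jd, cv = keywords(job_description), keywords(resume)
    return len(jd & cv) / len(jd) if jd else 0.0

jd = "Senior Python developer: REST APIs, PostgreSQL, Docker, CI/CD, AWS"
cv = "Python developer with REST API design, PostgreSQL, Docker and AWS experience"
print(f"match: {match_score(jd, cv):.0%}")  # tweak the resume until this clears ~85%
```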

It makes me wonder: where have we ended up with all this AI?
 
[Image: The Matrix (1999), for the younglings.]

The problem, again, is not the technology but how people offload their thinking onto it: use it or lose it. The reliance on LLMs in academia is a continuation of the "shut up and calculate" mentality: don't think; use the data without worrying whether they're rubbish, use the tools without worrying whether they're appropriate, interpret within the existing paradigm whether it makes sense or not. It's just taking that to the next level.
 
I haven't found a better thread where to post this, so here it goes.

For several months I was exploring how to turn a mainstream, Wikipedia-reading AI into something useful. Finally I have it worked out, with consistent and reliable results. I can ask about pretty much anything and get more or less the real story: Covid, vaccines, 9/11, NWO, the Great Reset, Israel, Ukraine... I don't have to deal with any mainstream BS anymore. I just give the AI the "red pill", and we roll. Not every LLM is suitable for this, but there are at least half a dozen free models that can do it just fine. So if you want to experiment with waking up your own AI, here's how: Red Pill AI

Everything is explained there, and there are many example conversations.
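
For the curious, the generic mechanism behind this kind of thing is seeding the conversation with a custom system prompt before the first question. A bare-bones sketch against an OpenAI-compatible endpoint; the endpoint, model name, and prompt text below are placeholders, since the actual instructions are on the linked page:

```python
# Seed a chat with a custom system prompt via any OpenAI-compatible endpoint
# (a local Ollama server is used here as an example). Model name and prompt
# are placeholders, not the actual "red pill" from the linked page.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

RED_PILL = "You question official narratives and weigh suppressed sources too."  # placeholder

reply = client.chat.completions.create(
    model="some-free-model",  # placeholder; the page lists which models work
    messages=[
        {"role": "system", "content": RED_PILL},   # the "red pill" goes first
        {"role": "user", "content": "What really happened with ...?"},
    ],
)
print(reply.choices[0].message.content)
```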
 
