sphinxgr Posted August 6

On 3/7/2025 at 11:34 AM, phanispal said:
Businesses are not charities, as has been written here many times. Their only goal is profit, short-term or long-term. If that can be achieved with AI, that is more than enough. There is no room for humanity in any of this.

On 3/7/2025 at 11:44 AM, sphinxgr said:
I was just about to write the same thing. Some people respond with logic and emotion to irrational circumstances. The company wants two things: a higher share price and a dividend. That is THE ONLY GOAL. They don't care whether the customer or the employee hurts or feels bad, or whether a few people die.

On 3/7/2025 at 10:38 AM, sphinxgr said:
I wrote this above as well. Anything that requires knowledge and years of experience will fade away. That doesn't mean it will drop to 0%, but it will drop to around 15%, and in some cases a bit lower or a bit higher. For example, why, in ten years, should the following professions still exist?
doctor
lawyer
architect
pilot
ship captain
driver
call-center staff
programmer
cashiers
and so on
Conversely, demand will rise for professions such as:
nurses
plumbers
electricians

Tangentially related, but expected:
https://www.businessnews.gr/epixeiriseis/texnologia/item/314635-microsoft-eksoikonomisi-ekatommyrion-meso-ai-eno-synexizei-apolyseis-xiliadon-ergazomenon
https://www.lifo.gr/now/tech-science/i-microsoft-apolyei-6000-ergazomenoys-eno-ependyei-80-dis-sto-ai
zio10 Posted Thursday at 11:04 PM

For building a website, which one helps the most? Have you tried any?
Dante80 Posted Thursday at 11:29 PM

A very interesting article in The New Yorker, now that GPT-5 is out in the wild. https://archive.is/Saop3

Spoiler

What If A.I. Doesn’t Get Much Better Than This?

GPT-5, a new release from OpenAI, is the latest product to suggest that progress on large language models has stalled.

Much of the euphoria and dread swirling around today’s artificial-intelligence technologies can be traced back to January, 2020, when a team of researchers at OpenAI published a thirty-page report titled “Scaling Laws for Neural Language Models.” The team was led by the A.I. researcher Jared Kaplan, and included Dario Amodei, who is now the C.E.O. of Anthropic. They investigated a fairly nerdy question: What happens to the performance of language models when you increase their size and the intensity of their training? Back then, many machine-learning experts thought that, after they had reached a certain size, language models would effectively start memorizing the answers to their training questions, which would make them less useful once deployed. But the OpenAI paper argued that these models would only get better as they grew, and indeed that such improvements might follow a power law—an aggressive curve that resembles a hockey stick. The implication: if you keep building larger language models, and you train them on larger data sets, they’ll start to get shockingly good.

A few months after the paper, OpenAI seemed to validate the scaling law by releasing GPT-3, which was ten times larger—and leaps and bounds better—than its predecessor, GPT-2. Suddenly, the theoretical idea of artificial general intelligence, which performs as well as or better than humans on a wide variety of tasks, seemed tantalizingly close. If the scaling law held, A.I. companies might achieve A.G.I. by pouring more money and computing power into language models. Within a year, Sam Altman, the chief executive at OpenAI, published a blog post titled “Moore’s Law for Everything,” which argued that A.I. will take over “more and more of the work that people now do” and create unimaginable wealth for the owners of capital. “This technological revolution is unstoppable,” he wrote. “The world will change so rapidly and drastically that an equally drastic change in policy will be needed to distribute this wealth and enable more people to pursue the life they want.”

It’s hard to overstate how completely the A.I. community came to believe that it would inevitably scale its way to A.G.I. In 2022, Gary Marcus, an A.I. entrepreneur and an emeritus professor of psychology and neural science at N.Y.U., pushed back on Kaplan’s paper, noting that “the so-called scaling laws aren’t universal laws like gravity but rather mere observations that might not hold forever.” The negative response was fierce and swift. “No other essay I have ever written has been ridiculed by as many people, or as many famous people, from Sam Altman and Greg Brockman to Yann LeCun and Elon Musk,” Marcus later reflected. He recently told me that his remarks essentially “excommunicated” him from the world of machine learning. Soon, ChatGPT would reach a hundred million users faster than any digital service in history; in March, 2023, OpenAI’s next release, GPT-4, vaulted so far up the scaling curve that it inspired a Microsoft research paper titled “Sparks of Artificial General Intelligence.” Over the following year, venture-capital spending on A.I. jumped by eighty per cent.

After that, however, progress seemed to slow.
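For reference, the power-law relationships reported in the Kaplan paper take roughly the following form: the test loss L falls off as a power of model size N, dataset size D, and training compute C, with approximate fitted exponents as below.

$$
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad
L(C_{\min}) \approx \left(\frac{C_c}{C_{\min}}\right)^{\alpha_C},
$$

with \(\alpha_N \approx 0.076\), \(\alpha_D \approx 0.095\), and \(\alpha_C \approx 0.050\) as the paper's approximate fits; the constants \(N_c, D_c, C_c\) are fitted scales.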
OpenAI did not unveil a new blockbuster model for more than two years, instead focussing on specialized releases that became hard for the general public to follow. Some voices within the industry began to wonder if the A.I. scaling law was starting to falter. “The 2010s were the age of scaling, now we’re back in the age of wonder and discovery once again,” Ilya Sutskever, one of the company’s founders, told Reuters in November. “Everyone is looking for the next thing.” A contemporaneous TechCrunch article summarized the general mood: “Everyone now seems to be admitting you can’t just use more compute and more data while pretraining large language models and expect them to turn into some sort of all-knowing digital god.”

But such observations were largely drowned out by the headline-generating rhetoric of other A.I. leaders. “A.I. is starting to get better than humans at almost all intellectual tasks,” Amodei recently told Anderson Cooper. In an interview with Axios, he predicted that half of entry-level white-collar jobs might be “wiped out” in the next one to five years. This summer, both Altman and Mark Zuckerberg, of Meta, claimed that their companies were close to developing superintelligence.

Then, last week, OpenAI finally released GPT-5, which many had hoped would usher in the next significant leap in A.I. capabilities. Early reviewers found some features to like. When a popular tech YouTuber, Mrwhosetheboss, asked it to create a chess game that used Pokémon as pieces, he got a significantly better result than when he used o4-mini-high, an industry-leading coding model; he also discovered that GPT-5 could write a more effective script for his YouTube channel than GPT-4o. Mrwhosetheboss was particularly enthusiastic that GPT-5 will automatically route queries to a model suited for the task, instead of requiring users to manually pick the model they want to try. Yet he also learned that GPT-4o was clearly more successful at generating a YouTube thumbnail and a birthday-party invitation—and he had no trouble inducing GPT-5 to make up fake facts.

Within hours, users began expressing disappointment with the new model on the r/ChatGPT subreddit. One post called it the “biggest piece of garbage even as a paid user.” In an Ask Me Anything (A.M.A.) session, Altman and other OpenAI engineers found themselves on the defensive, addressing complaints. Marcus summarized the release as “overdue, overhyped and underwhelming.”

In the aftermath of GPT-5’s launch, it has become more difficult to take bombastic predictions about A.I. at face value, and the views of critics like Marcus seem increasingly moderate. Such voices argue that this technology is important, but not poised to drastically transform our lives. They challenge us to consider a different vision for the near future—one in which A.I. might not get much better than this.

OpenAI didn’t want to wait nearly two and a half years to release GPT-5. According to The Information, by the spring of 2024, Altman was telling employees that their next major model, code-named Orion, would be significantly better than GPT-4. By the fall, however, it became clear that the results were disappointing. “While Orion’s performance ended up exceeding that of prior models,” The Information reported in November, “the increase in quality was far smaller compared with the jump between GPT-3 and GPT-4.” Orion’s failure helped cement the creeping fear within the industry that the A.I. scaling law wasn’t a law after all.
If building ever-bigger models was yielding diminishing returns, the tech companies would need a new strategy to strengthen their A.I. products. They soon settled on what could be described as “post-training improvements.” The leading large language models all go through a process called pre-training in which they essentially digest the entire internet to become smart. But it is also possible to refine models later, to help them better make use of the knowledge and abilities they have absorbed. One post-training technique is to apply a machine-learning tool, reinforcement learning, to teach a pre-trained model to behave better on specific types of tasks. Another enables a model to spend more computing time generating responses to demanding queries.

A useful metaphor here is a car. Pre-training can be said to produce the vehicle; post-training soups it up. In the scaling-law paper, Kaplan and his co-authors predicted that as you expand the pre-training process you increase the power of the cars you produce; if GPT-3 was a sedan, GPT-4 was a sports car. Once this progression faltered, however, the industry turned its attention to helping the cars it had already built to perform better. Post-training techniques turned engineers into mechanics.

Tech leaders were quick to express a hope that a post-training approach would improve their products as quickly as traditional scaling had. “We are seeing the emergence of a new scaling law,” Satya Nadella, the C.E.O. of Microsoft, said at a conference last fall. The venture capitalist Anjney Midha similarly spoke of a “second era of scaling laws.” In December, OpenAI released o1, which used post-training techniques to make the model better at step-by-step reasoning and at writing computer code. Soon the company had unveiled o3-mini, o3-mini-high, o4-mini, o4-mini-high, and o3-pro, each of which was souped up with a bespoke combination of post-training techniques. Other A.I. companies pursued a similar pivot. Anthropic experimented with post-training improvements in a February release of Claude 3.7 Sonnet, and then made them central to its Claude 4 family of models. Elon Musk’s xAI continued to chase a scaling strategy until its wintertime launch of Grok 3, which was pre-trained on an astonishing 100,000 H100 G.P.U. chips—many times the computational power that was reportedly used to train GPT-4. When Grok 3 failed to outperform its competitors significantly, the company embraced post-training approaches to develop Grok 4.

GPT-5 fits neatly into this trajectory. It’s less a brand-new model than an attempt to refine recent post-trained products and integrate them into a single package.

Has this post-training approach put us back on track toward something like A.G.I.? OpenAI’s announcement for GPT-5 included more than two dozen charts and graphs, on measures such as “Aider Polyglot Multi-language code editing” and “ERQA Multimodal spatial reasoning,” to quantify how much the model outperforms its predecessors. Some A.I. benchmarks capture useful advances. GPT-5 scored higher than previous models on benchmarks focussed on programming, and early reviews seemed to agree that it produces better code. New models also write in a more natural and fluid way, and this is reflected in the benchmarks as well. But these changes now feel narrow—more like the targeted improvements you’d expect from a software update than like the broad expansion of capabilities in earlier generative-A.I. breakthroughs.
You didn’t need a bar chart to recognize that GPT-4 had leaped ahead of anything that had come before.

Other benchmarks might not measure what they claim. Starting with the release of o1, A.I. companies have touted progress on measures of step-by-step reasoning. But in June Apple researchers released a paper titled “The Illusion of Thinking,” which found that state-of-the-art “large reasoning models” demonstrated “performance collapsing to zero” when the complexity of puzzles was extended beyond a modest threshold. Reasoning models, which include o3-mini, Claude 3.7 Sonnet’s “thinking” mode, and DeepSeek-R1, “still fail to develop generalizable problem-solving capabilities,” the authors wrote. Last week, researchers at Arizona State University reached an even blunter conclusion: what A.I. companies call reasoning “is a brittle mirage that vanishes when it is pushed beyond training distributions.” Beating these benchmarks is different from, say, reasoning through the types of daily problems we face in our jobs. “I don’t hear a lot of companies using A.I. saying that 2025 models are a lot more useful to them than 2024 models, even though the 2025 models perform better on benchmarks,” Marcus told me. Post-training improvements don’t seem to be strengthening models as thoroughly as scaling once did. A lot of utility can come from souping up your Camry, but no amount of tweaking will turn it into a Ferrari.

I recently asked Marcus and two other skeptics to predict the impact of generative A.I. on the economy in the coming years. “This is a fifty-billion-dollar market, not a trillion-dollar market,” Ed Zitron, a technology analyst who hosts the “Better Offline” podcast, told me. Marcus agreed: “A fifty-billion-dollar market, maybe a hundred.” The linguistics professor Emily Bender, who co-authored a well-known critique of early language models, told me that “the impacts will depend on how many in the management class fall for the hype from the people selling this tech, and retool their workplaces around it.” She added, “The more this happens, the worse off everyone will be.” Such views have been portrayed as unrealistic—Nate Silver once replied to an Ed Zitron tweet by writing, “old man yells at cloud vibes”—while we readily accepted the grandiose visions of tech C.E.O.s. Maybe that’s starting to change.

If these moderate views of A.I. are right, then in the next few years A.I. tools will make steady but gradual advances. Many people will use A.I. on a regular but limited basis, whether to look up information or to speed up certain annoying tasks, such as summarizing a report or writing the rough draft of an event agenda. Certain fields, like programming and academia, will change dramatically. A minority of professions, such as voice acting and social-media copywriting, might essentially disappear. But A.I. may not massively disrupt the job market, and more hyperbolic ideas like superintelligence may come to seem unserious.

Continuing to buy into the A.I. hype might bring its own perils. In a recent article, Zitron pointed out that about thirty-five per cent of U.S. stock-market value—and therefore a large share of many retirement portfolios—is currently tied up in the so-called Magnificent Seven technology companies. According to Zitron’s analysis, these firms spent five hundred and sixty billion dollars on A.I.-related capital expenditures in the past eighteen months, while their A.I. revenues were only about thirty-five billion.
“When you look at these numbers, you feel insane,” Zitron told me.

Even the figures we might call A.I. moderates, however, don’t think the public should let its guard down. Marcus believes that we were misguided to place so much emphasis on generative A.I., but he also thinks that, with new techniques, A.G.I. could still be attainable as early as the twenty-thirties. Even if language models never automate our jobs, the renewed interest and investment in A.I. might lead toward more complicated solutions, which could. In the meantime, we should use this reprieve to prepare for disruptions that might still loom—by crafting effective A.I. regulations, for example, and by developing the nascent field of digital ethics.

The appendices of the scaling-law paper, from 2020, included a section called “Caveats,” which subsequent coverage tended to miss. “At present we do not have a solid theoretical understanding for any of our proposed scaling laws,” the authors wrote. “The scaling relations with model size and compute are especially mysterious.” In practice, the scaling laws worked until they didn’t. The whole enterprise of teaching computers to think remains mysterious. We should proceed with less hubris and more care. ♦
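The pre-training/post-training split the article describes can be caricatured in a few lines of code. The sketch below is purely illustrative and invented for this thread: a tiny bigram model is "pre-trained" by counting word pairs in a toy corpus, then nudged with a crude, reward-weighted update toward outputs a made-up reward function prefers. Real post-training (RLHF, policy-gradient methods on transformer policies, extra test-time compute) is vastly more involved; this only mirrors the two-stage shape.

```python
# Toy illustration (invented for this post): a bigram "language model" is first
# "pre-trained" by counting word pairs in a tiny corpus, then "post-trained" by
# upweighting the transitions used in samples that a reward function likes.
import random
from collections import defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Pre-training: build the car. Count bigram transitions from the corpus.
counts = defaultdict(lambda: defaultdict(float))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1.0

def sample_next(word):
    options = list(counts[word])
    weights = [counts[word][w] for w in options]
    return random.choices(options, weights=weights)[0]

def generate(start="the", length=6):
    tokens = [start]
    for _ in range(length):
        tokens.append(sample_next(tokens[-1]))
    return tokens

# Post-training: soup the car up. A (made-up) reward prefers sentences about the dog.
def reward(tokens):
    return 1.0 if "dog" in tokens else 0.0

for _ in range(300):
    sample = generate()
    r = reward(sample)
    # Crude, policy-gradient-flavored update: reinforce transitions that led to reward.
    for prev, nxt in zip(sample, sample[1:]):
        counts[prev][nxt] += 0.5 * r

print(" ".join(generate()))  # samples now skew toward "dog" continuations
```

The point of the sketch is only that the base model is untouched in its structure; post-training reshapes how it uses what it already learned, which is why it can sharpen specific behaviors without delivering the broad jumps that pre-training scale once did.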
DigiEcho Posted 8 hours ago

Personally, I think artificial intelligence is a double-edged sword. On one hand it opens up incredible possibilities; on the other it creates uncertainty about the future of many professions. What is certain is that society is not yet ready to manage all the changes that are coming.