AI Index: State of AI in 13 Charts

In the new report, foundation models dominate, benchmarks fall, prices skyrocket, and on the global stage, the U.S. overshadows its rivals.


This year’s AI Index — a 500-page report tracking 2023’s worldwide trends in AI — is out.

The index is an independent initiative at the Stanford Institute for Human-Centered Artificial Intelligence (HAI), led by the AI Index Steering Committee, an interdisciplinary group of experts from across academia and industry. This year’s report covers the rise of multimodal foundation models, major cash investments into generative AI, new performance benchmarks, shifting global opinions, and new major regulations.

Don’t have an afternoon to pore over the findings? Check out the high-level takeaways here.

Pie chart showing 98 models were open-sourced in 2023

A Move Toward Open Source

This past year, organizations released 149 foundation models, more than double the number released in 2022. Of these newly released models, 65.7% were open-source (meaning they can be freely used and modified by anyone), compared with only 44.4% in 2022 and 33.3% in 2021.

Bar chart showing that closed models outperformed open models across tasks

But At a Cost of Performance?

Closed-source models still outperform their open-source counterparts. On 10 selected benchmarks, closed models achieved a median performance advantage of 24.2%, with differences ranging from as little as 4.0% on mathematical tasks like GSM8K to as much as 317.7% on agentic tasks like AgentBench.

Bar chart showing Google has more foundation models than any other company

Biggest Players

Industry dominates AI, especially in building and releasing foundation models. This past year Google edged out other industry players in releasing the most models, including Gemini and RT-2. In fact, since 2019, Google has led in releasing the most foundation models, with a total of 40, followed by OpenAI with 20. Academia trails industry: This past year, UC Berkeley released three models and Stanford two.

Line chart showing industry far outpaces academia and government in creating foundation models over the decade

Industry Dwarfs All

If you needed more striking evidence that corporate AI is the only player in the room right now, this should do it. In 2023, industry accounted for 72% of all new foundation models.

Chart showing the growing costs of training AI models

Prices Skyrocket

One of the reasons academia and government have been edged out of the AI race: the exponential increase in cost of training these giant models. Google’s Gemini Ultra cost an estimated $191 million worth of compute to train, while OpenAI’s GPT-4 cost an estimated $78 million. In comparison, in 2017, the original Transformer model, which introduced the architecture that underpins virtually every modern LLM, cost around $900.
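To get a sense of the scale of this increase, the reported estimates can be compared directly. A quick back-of-the-envelope calculation, using only the figures quoted above (all of which are estimates):

```python
# Rough scale comparison of the reported training-cost estimates.
# All figures come from the text above and are estimates, not disclosures.
transformer_2017 = 900          # original Transformer, 2017
gpt4 = 78_000_000               # OpenAI's GPT-4
gemini_ultra = 191_000_000      # Google's Gemini Ultra

print(f"GPT-4 vs. Transformer:        {gpt4 / transformer_2017:,.0f}x")
print(f"Gemini Ultra vs. Transformer: {gemini_ultra / transformer_2017:,.0f}x")
```

In other words, training a frontier model in 2023 cost on the order of 100,000 to 200,000 times more than training the architecture it descends from.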

Bar chart showing the United States produces by far the largest number of foundation models

What AI Race?

At least in terms of notable machine learning models, the United States vastly outpaced other countries in 2023, developing a total of 61 models. Since 2019, the U.S. has consistently led in originating the majority of notable models, followed by China and the UK.

Line chart showing that across many intellectual task categories, AI has exceeded human performance

Move Over, Human

As of 2023, AI has hit human-level performance on many significant AI benchmarks, from those testing reading comprehension to visual reasoning. Still, it falls just short on some benchmarks like competition-level math. Because AI has been blasting past so many standard benchmarks, AI scholars have had to create new and more difficult challenges. This year’s index also tracked several of these new benchmarks, including those for tasks in coding, advanced reasoning, and agentic behavior.

Bar chart showing a dip in overall private investment in AI, but a surge in generative AI investment

Private Investment Drops (But We See You, GenAI)

While AI private investment has steadily dropped since 2021, generative AI is gaining steam. In 2023, the sector attracted $25.2 billion, nearly ninefold the investment of 2022 and about 30 times the amount from 2019 (call it the ChatGPT effect). Generative AI accounted for over a quarter of all AI-related private investments in 2023.

Bar chart showing the United States overwhelmingly dwarfs other countries in private investment in AI

U.S. Wins $$ Race

And again, in 2023 the United States dominated AI private investment. The $67.2 billion invested in the U.S. was roughly 8.7 times the amount invested in the next highest country, China, and 17.8 times the amount invested in the United Kingdom. That lineup looks the same when zooming out: Cumulatively since 2013, the United States leads investments at $335.2 billion, followed by China with $103.7 billion, and the United Kingdom at $22.3 billion.
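The quoted ratios let us back out approximate 2023 totals for China and the U.K. These derived figures are inferences from the ratios above, not numbers stated in the report:

```python
# Infer the implied 2023 investment figures from the ratios in the text.
# Only the U.S. total is quoted directly; treat the others as approximations.
us = 67.2            # billions USD, 2023 (quoted)
china = us / 8.7     # implied by the "8.7 times" ratio
uk = us / 17.8       # implied by the "17.8 times" ratio

print(f"China: ~${china:.1f}B, U.K.: ~${uk:.1f}B")
```

That works out to roughly $7.7 billion for China and $3.8 billion for the United Kingdom.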

Infographic showing 26% of businesses use AI for contact-center automation, and 23% use it for personalization

Where is Corporate Adoption?

More companies are implementing AI in some part of their business: In surveys, 55% of organizations said they were using AI in 2023, up from 50% in 2022 and 20% in 2017. Businesses report using AI to automate contact centers, personalize content, and acquire new customers. 

Bar chart showing 57% of people believe AI will change how they do their job in 5 years, and 36% believe AI will replace their jobs.

Younger and Wealthier People Worry About Jobs

Globally, most people expect AI to change their jobs, and more than a third expect AI to replace them. Younger generations — Gen Z and millennials — anticipate more substantial effects from AI compared with older generations like Gen X and baby boomers. Specifically, 66% of Gen Z compared with 46% of boomer respondents believe AI will significantly affect their current jobs. Meanwhile, individuals with higher incomes, more education, and decision-making roles foresee AI having a great impact on their employment.

Bar chart depicting the countries most nervous about AI; Australia at 69%, Great Britain at 65%, and Canada at 63% top the list

While the Commonwealth Worries About AI Products

When asked in a survey whether AI products and services make them nervous, 69% of Australians and 65% of Britons said yes. Japanese respondents were the least nervous about AI products, at 23%.

Line graph showing an uptick in AI regulation in the United States since 2016; 25 policies passed in 2023

Regulation Rallies

More American regulatory agencies are passing regulations to protect citizens and govern the use of AI tools and data. For example, the Copyright Office and the Library of Congress issued copyright registration guidance concerning works containing AI-generated material, while the Securities and Exchange Commission developed a cybersecurity risk management strategy, governance, and incident disclosure plan. The agencies that passed the most regulations were the Executive Office of the President and the Commerce Department.

The AI Index was first created to track AI development. The index collaborates with such organizations as LinkedIn, Quid, McKinsey, Studyportals, the Schwartz Reisman Institute, and the International Federation of Robotics to gather the most current research and feature important insights on the AI ecosystem. 


Notes from the AI frontier: Modeling the impact of AI on the world economy

The role of artificial intelligence (AI) tools and techniques in business and the global economy is a hot topic. This is not surprising given that AI might usher in radical—arguably unprecedented—changes in the way people live and work. The AI revolution is not in its infancy, but most of its economic impact is yet to come.

New research from the McKinsey Global Institute attempts to simulate the impact of AI on the world economy. First, it builds on an understanding of the behavior of companies and the dynamics of various sectors to develop a bottom-up view of how AI technologies are adopted and absorbed. Second, it takes into account the disruptions that countries, companies, and workers are likely to experience as they transition to AI. There will very probably be costs during this transition period, and they need to be factored into any estimate. The analysis examines how economic gains and losses are likely to be distributed among firms, employees, and countries and how this distribution could potentially hamper the capture of AI benefits. Third, the research examines the dynamics of AI for a wide range of countries—clustered into groups with similar characteristics—with the aim of giving a more global view.

The analysis should be seen as a guide to the potential economic impact of AI based on the best knowledge available at this stage. Among the major findings are the following:

There is large potential for AI to contribute to global economic activity.

A key challenge is that adoption of AI could widen gaps among countries, companies, and workers.


The McKinsey Global Institute looked at five broad categories of AI: computer vision, natural language, virtual assistants, robotic process automation, and advanced machine learning. Companies will likely use these tools to varying degrees. Some will take an opportunistic approach, testing only one technology and piloting it in a specific function (an approach our modeling calls adoption). Others might be bolder, adopting all five and then absorbing them across the entire organization (an approach we call full absorption). In between these two poles, there will be many companies at different stages of adoption; the model also captures this partial impact.

By 2030, the average simulation shows that some 70 percent of companies might have adopted at least one type of AI technology but that less than half will have fully absorbed the five categories. The pattern of adoption and full absorption might be relatively rapid—at the high end of what has been observed with other technologies.

Several barriers might hinder rapid adoption and absorption (see video, “A minute with the McKinsey Global Institute: Challenges of adopting automation technology”). For instance, late adopters might find it difficult to generate impact from AI, because front-runners have already captured AI opportunities and late adopters lag in developing capabilities and attracting talent.

Nevertheless, at the global average level of adoption and absorption implied by our simulation, AI has the potential to deliver additional global economic activity of around $13 trillion by 2030, or about 16 percent higher cumulative GDP compared with today. This amounts to 1.2 percent additional GDP growth per year. If delivered, this impact would compare well with that of other general-purpose technologies through history.
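A quick sanity check on the arithmetic: the simulation's baseline year is not stated in this excerpt, so the 12- and 13-year horizons below are assumptions, but compounding 1.2 percent per year does land close to the quoted 16 percent cumulative figure:

```python
# Does 1.2% additional GDP growth per year compound to ~16% by 2030?
# The baseline year is not given here, so both horizons are assumptions.
annual = 0.012
for years in (12, 13):
    cumulative = (1 + annual) ** years - 1
    print(f"{years} years at 1.2%/yr -> {cumulative:.1%} cumulative")
```

Twelve years yields about 15.4 percent and thirteen about 16.8 percent, bracketing the report's rounded figure.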

A number of factors, including labor automation, innovation, and new competition, affect AI-driven productivity growth. Micro factors, such as the pace of adoption of AI, and macro factors, such as the global connectedness or labor-market structure of a country, both contribute to the size of the impact.

Our simulation examined seven possible channels of impact. The first three relate to the impact of AI adoption on the need for, and mix of, production factors that have direct impact on company productivity. The other four are externalities linked to the adoption of AI related to the broad economic environment and the transition to AI. We acknowledge that these seven channels are not definitive or necessarily comprehensive but rather a starting point based on our current understanding and trends currently under way (Exhibit 1).

The impact of AI might not be linear but could build up at an accelerating pace over time. Its contribution to growth might be three or more times higher by 2030 than it is over the next five years. An S-curve pattern of adoption and absorption of AI is likely—a slow start due to the substantial costs and investment associated with learning and deploying these technologies, then an acceleration driven by the cumulative effect of competition and an improvement in complementary capabilities alongside process innovations.
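The S-curve pattern described above can be sketched with a logistic function. The midpoint and steepness parameters here are purely illustrative choices, not values from the McKinsey model:

```python
import math

# Minimal logistic (S-curve) adoption sketch: slow start, then
# acceleration, then saturation. Midpoint (year 6) and steepness (0.8)
# are illustrative assumptions, not parameters from the report.
def adoption(year, midpoint=6.0, steepness=0.8):
    """Share of eventual adopters who have adopted by `year`."""
    return 1.0 / (1.0 + math.exp(-steepness * (year - midpoint)))

for year in (0, 3, 6, 9, 12):
    print(f"year {year:2d}: {adoption(year):.0%} adopted")
```

The curve stays near zero for the first few years, crosses 50 percent at the midpoint, and flattens near saturation, which is the "slow burn, then acceleration" dynamic the text describes.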

It would be a misjudgment to interpret this “slow burn” pattern of impact as proof that the effect of AI will be limited. The size of benefits for those who move early into these technologies will build up in later years at the expense of firms with limited or no adoption.

Section 2

Although AI can deliver a boost to economic activity, the benefits are likely to be uneven.

How AI could affect countries

Potentially, AI might widen gaps between countries, reinforcing the current digital divide. Countries might need different strategies and responses as AI-adoption rates vary.

Leaders of AI adoption (mostly in developed countries) could increase their lead over developing countries. Leading AI countries could capture an additional 20 to 25 percent in net economic benefits, compared with today, while developing countries might capture only about 5 to 15 percent. Many developed countries might have no choice but to push AI to capture higher productivity growth as their GDP-growth momentum slows—in many cases, partly reflecting the challenge due to aging populations. Moreover, in these economies, wage rates are high, which means that there is more incentive to substitute labor with machines than there is in low-wage, developing countries.


In contrast, developing countries tend to have other ways, including catching up with best practices and restructuring their industries, to improve their productivity. Therefore, they might have less incentive to push for AI (which, in any case, might offer them a relatively smaller economic benefit than it does advanced economies). Some developing countries might prove to be exceptions to this rule. For instance, China has a national strategy in place to become a global leader in the AI supply chain and is investing heavily.

How AI could affect companies

It is possible that AI technologies could lead to a performance gap between front-runners (companies that fully absorb AI tools across their enterprises over the next five to seven years) and nonadopters (companies that do not adopt AI technologies at all or have not fully absorbed them in their enterprises by 2030).

At one end of the spectrum, front-runners are likely to benefit disproportionately. By 2030, they could potentially double their cash flow (economic benefit captured minus associated investment and transition costs). This implies additional annual net cash-flow growth of about 6 percent through the next decade and beyond. Front-runners tend to have a strong starting IT base, a higher propensity to invest in AI, and positive views of the business case for AI.

At the other end of the spectrum, nonadopters might experience around a 20 percent decline in their cash flow from today’s levels, assuming the same cost and revenue model as today. One important driver of this profit pressure is the existence of strong competitive dynamics among companies that could shift market share from laggards to front-runners and might prompt debate about the unequal distribution of the benefits of AI (Exhibit 2).

How AI could affect workers

A widening gap might unfold at the level of individual workers (see video, “A minute with the McKinsey Global Institute: What AI can and can’t [yet] do”). Demand for jobs could shift away from repetitive tasks toward those that are socially and cognitively driven and require more digital skills. Job profiles characterized by repetitive activities or that require a low level of digital skills could experience the largest decline as a share of total employment, to around 30 percent by 2030, from some 40 percent. The largest gain in share could be in nonrepetitive activities and those that require high digital skills, rising from roughly 40 percent to more than 50 percent.


These shifts would have an impact on wages. We simulate that around 13 percent of the total wage bill could shift to categories requiring nonrepetitive and high digital skills, where incomes could rise, while workers in the repetitive and low-digital-skills categories could experience a stagnation or even a cut in their wages. The share of the total wage bill of the latter group could decline to 20 percent, from 33 percent.

A direct consequence of this widening gap in employment and wages would be an intensifying war for talent, particularly for people skilled in developing and using AI tools. On the other hand, there is the potential for a structural excess supply of the still relatively large share of people lacking the digital and cognitive skills necessary to work with machines.

Overall, the adoption and absorption of AI might not have a significant impact on net employment. There will likely be substantial pressure on full-time-employment demand, but the total net impact in aggregate might be more limited than many fear. Our average global scenario suggests that total full-time-equivalent-employment demand might remain flat, or even that there could be a slightly negative net impact on jobs by 2030.

The opportunity of AI is significant, but there is no doubt that its penetration might cause disruption. The productivity dividend of AI probably will not materialize immediately. Its impact is likely to build up at an accelerated pace over time; therefore, the benefits of initial investment might not be visible in the short term. Patience and long-term strategic thinking will be required.

Policy makers will need to show bold leadership to overcome understandable discomfort among citizens about the perceived threat to their jobs as automation takes hold. Companies will also be important actors in searching for solutions on the mammoth task of skilling and reskilling people to work with AI. Individuals will need to adjust to a new world in which job turnover could be more frequent, they might have to transition to new types of employment, and they likely must continually refresh and update their skills to match the needs of a dynamically changing job market.

Using historical trends in the ratio of new jobs created to old jobs lost, and adjusting for a lower labor-output ratio that reflects the likely labor-saving nature of AI technologies via smart automation, new jobs driven by investment in AI could augment employment by about 5 percent by 2030. The total productivity effect could make a positive contribution to employment of about 10 percent.


Jacques Bughin is a senior partner at the McKinsey Global Institute, where Jeongmin Seong is a senior fellow, James Manyika is chairman and a director, and Michael Chui is a partner. Raoul Joshi is a consultant in McKinsey’s Stockholm office.





A.I.’s Original Sin

A Times investigation found that tech giants altered their own rules to train their newest artificial intelligence systems.

This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors. Please review the episode audio before quoting from this transcript and email [email protected] with any questions.

From “The New York Times,” I’m Michael Barbaro. This is “The Daily.”

[MUSIC PLAYING]

Today, a “Times” investigation shows how as the country’s biggest technology companies race to build powerful new artificial intelligence systems, they bent and broke the rules from the start.

My colleague Cade Metz on what he uncovered.

It’s Tuesday, April 16th.

Cade, when we think about all the artificial intelligence products released over the past couple of years, including, of course, these chatbots we’ve talked a lot about on the show, we so frequently talk about their future capabilities, their influence on society, jobs, our lives. But you recently decided to go back in time to AI’s past, to its origins, to understand the decisions that were made, basically, at the birth of this technology. So why did you decide to do that?

Because if you’re thinking about the future of these chatbots, that is defined by their past. The thing you have to realize is that these chatbots learn their skills by analyzing enormous amounts of digital data.

So what my colleagues and I wanted to do with our investigation was really focus on that effort to gather more data. We wanted to look at the type of data these companies were collecting, how they were gathering it, and how they were feeding it into their systems.

And when you all undertake this line of reporting, what do you end up finding?

We found that three major players in this race, OpenAI, Google, and Meta, as they were locked into this competition to develop better and better artificial intelligence, were willing to do almost anything to get their hands on this data, including ignoring, and in some cases violating, corporate rules and wading into a legal gray area as they gathered it.

Basically, cutting corners.

Cutting corners left and right.

OK, let’s start with OpenAI, the flashiest player of all.

The most interesting thing we found is that in late 2021, as OpenAI, the startup in San Francisco that built ChatGPT, was pulling together the fundamental technology that would power that chatbot, they ran out of data, essentially.

They had used just about all the respectable English language text on the internet to build this system. And just let that sink in for a bit.

I mean, I’m trying to let that sink in. They basically, like a Pac-Man in an old game, just consumed almost all the English words on the internet, which is kind of unfathomable.

Wikipedia articles by the thousands, news articles, Reddit threads, digital books by the millions. We’re talking about hundreds of billions, even trillions of words.

So by the end of 2021, OpenAI had no more English language texts that they could feed into these systems, but their ambitions were such that they wanted even more.

So here, we should remember that if you’re gathering up all the English language text on the internet, a large portion of that is going to be copyrighted.

So if you’re one of these companies gathering data at that scale, you are absolutely gathering copyrighted data, as well.

Which suggests that, from the very beginning, these companies, a company like OpenAI with ChatGPT, were starting to bend, even break, the rules.

Yes. They are determined to build this technology, and thus they are willing to venture into what is a legal gray area.

So given that, what does OpenAI do once it, as you had said, runs out of English language words to mop up and feed into this system?

So they get together, and they say, all right, so what are other options here? And they say, well, what about all the audio and video on the internet? We could transcribe all the audio and video, turn it into text, and feed that into their systems.

Interesting.

So a small team at OpenAI, which included its president and co-founder Greg Brockman, built a speech-recognition technology called Whisper, which could transcribe audio files into text with high accuracy.

And then they gathered up all sorts of audio files, from across the internet, including audio books, podcasts —

— and most importantly, YouTube videos.

Hmm, of which there’s a seemingly endless supply, right? Fair to say maybe tens of millions of videos.

According to my reporting, we’re talking about at least a million hours of YouTube videos that were scraped off of that video-sharing site and fed into this speech recognition system in order to produce new text for training OpenAI’s chatbot. And YouTube’s terms of service do not allow a company like OpenAI to do this. YouTube, which is owned by Google, explicitly says you are not allowed to, in internet parlance, scrape videos en masse from across YouTube and use those videos to build a new application.

That is exactly what OpenAI did. According to my reporting, employees at the company knew that it broke YouTube terms of service, but they resolved to do it anyway.

So, Cade, this makes me want to understand what’s going on over at Google, which as we have talked about in the past on the show, is itself, thinking about and developing its own artificial intelligence model and product.

Well, as OpenAI scrapes up all these YouTube videos and starts to use them to build their chatbot, according to my reporting, some employees at Google, at the very least, are aware that this is happening.

Yes, now when we went to the company about this, a Google spokesman said it did not know that OpenAI was scraping YouTube content and said the company takes legal action over this kind of thing when there’s a clear reason to do so. But according to my reporting, at least some Google employees turned a blind eye to OpenAI’s activities because Google was also using YouTube content to train its AI.

So if they raise a stink about what OpenAI is doing, they end up shining a spotlight on themselves. And they don’t want to do that.

I guess I want to understand what Google’s relationship is to YouTube. Because of course, Google owns YouTube. So what is it allowed or not allowed to do when it comes to feeding YouTube data into Google’s AI models?

It’s an important distinction. Because Google owns YouTube, it defines what can be done with that data. And Google argues that it has a right to that data, that its terms of service allow it to use that data. However, because of that copyright issue, because the copyright to those videos belongs to you and me, lawyers who I’ve spoken to say people could take Google to court and try to determine whether or not those terms of service really allow Google to do this. There’s another legal gray area here where, although Google argues that it’s OK, others may argue it’s not.

Of course, what makes this all so interesting is, you essentially have one tech company, Google, keeping another tech company, OpenAI’s, dirty little secret about basically stealing from YouTube, because it doesn’t want people to know that it too is taking from YouTube. And so these companies are essentially enabling each other as they simultaneously seem to be bending or breaking the rules.

What this shows is that there is this belief, and it has been there for years within these companies, among their researchers, that they have a right to this data because they’re on a larger mission to build a technology that they believe will transform the world.

And if you really want to understand this attitude, you can look at our reporting from inside Meta.

And so what does Meta end up doing, according to your reporting?

Well, like Google and other companies, Meta had to scramble to build artificial intelligence that could compete with OpenAI. Mark Zuckerberg was calling engineers and executives at all hours, pushing them to acquire the data needed to improve the chatbot.

And at one point, my colleagues and I got hold of recordings of these Meta executives and engineers discussing this problem: how they could get their hands on more data and where they should try to find it. And they explored all sorts of options.

They talked about licensing books, one by one, at $10 a pop and feeding those into the model.

They even discussed acquiring the book publisher Simon & Schuster and feeding its entire library into their AI model. But ultimately, they decided all that was just too cumbersome, too time consuming, and on the recordings of these meetings, you can hear executives talk about how they were willing to run roughshod over copyright law and ignore the legal concerns and go ahead and scrape the internet and feed this stuff into their models.

They acknowledged that they might be sued over this. But they talked about how OpenAI had done this before them. That they, Meta were just following what they saw as a market precedent.

Interesting. So they go from having conversations like, should we buy a publisher that has tons of copyrighted material, suggesting that they’re very conscious of the legal terrain and of what’s right and what’s wrong, to instead saying, nah, let’s just follow the OpenAI model, that blueprint, and just do what we want to do, do what we think we have a right to do, which is to kind of just gobble up all this material across the internet.

It’s a snapshot of that Silicon Valley attitude that we talked about. Because they believe they are building this transformative technology, because they are in this intensely competitive situation where money and power is at stake, they are willing to go there.

But what that means is that there is, at the birth of this technology, a kind of original sin that can’t really be erased.

It can’t be erased, and people are beginning to notice. And they are beginning to sue these companies over it. These companies have to have this copyrighted data to build their systems. It is fundamental to their creation. If a lawsuit bars them from using that copyrighted data, that could bring down this technology.

We’ll be right back.

So Cade, walk us through these lawsuits that are being filed against these AI companies based on the decisions they made early on to use technology as they did and the chances that it could result in these companies not being able to get the data they so desperately say they need.

These suits are coming from a wide range of places. They’re coming from computer programmers who are concerned that their computer programs have been fed into these systems. They’re coming from book authors who have seen their books being used. They’re coming from publishing companies. They’re coming from news corporations like “The New York Times,” incidentally, which has filed a lawsuit against OpenAI and Microsoft.

News organizations that are concerned over their news articles being used to build these systems.

And here, I think it’s important to say as a matter of transparency, Cade, that your reporting is separate from that lawsuit. That lawsuit was filed by the business side of “The New York Times” by people who are not involved in your reporting or in this “Daily” episode, just to get that out of the way.

I’m assuming that you have spoken to many lawyers about this, and I wonder if there’s some insight that you can shed on the basic legal terrain? I mean, do the companies seem to have a strong case that they have a right to this information, or do companies like the “Times,” who are suing them, seem to have a pretty strong case that, no, that decision violates their copyrighted materials?

Like so many legal questions, this is incredibly complicated. It comes down to what’s called fair use, which is a part of copyright law that determines whether companies can use copyrighted data to build new things. And there are many factors that go into this. There are good arguments on the OpenAI side. There are good arguments on “The New York Times” side.

Copyright law says that you can’t take my work and reproduce it and sell it to someone. That’s not allowed. But what’s called fair use does allow companies and individuals to use copyrighted works in part. They can take snippets of them. They can take copyrighted works and transform them into something new. That is what OpenAI and others are arguing they’re doing.

But there are other things to consider. Does that transformative work compete with the individuals and companies that supplied the data that owned the copyrights?

And here, the suit between “The New York Times” company and OpenAI is illustrative. If “The New York Times” creates articles that are then used to build a chatbot, does that chatbot end up competing with “The New York Times?” Do people end up going to that chatbot for their information, rather than going to the “Times” website and actually reading the article? That is one of the questions that will end up deciding this case and cases like it.

So what would it mean for these AI companies for some, or even all, of these lawsuits to succeed?

Well, if these tech companies are required to license the copyrighted data that goes into their systems, if they’re required to pay for it, that becomes a problem for these companies. We’re talking about digital data the size of the entire internet.

Licensing all that copyrighted data is not necessarily feasible. We quote the venture capital firm Andreessen Horowitz in our story where one of their lawyers says that it does not work for these companies to license that data. It’s too expensive. It’s on too large a scale.

Hmm, it would essentially make this technology economically impractical.

Exactly. So a ruling by a jury or a judge against OpenAI could fundamentally change the way this technology is built. The extreme case is that these companies are no longer allowed to use copyrighted material in building these chatbots. And that means they have to start from scratch. They have to rebuild everything they’ve built. So this is something that not only imperils what they have today, it imperils what they want to build in the future.

And conversely, what happens if the courts rule in favor of these companies and say, you know what, this is fair use. You were fine to have scraped this material and to keep borrowing this material into the future free of charge?

Well, one significant roadblock drops for these companies. And they can continue to gather up all that extra data, including images and sounds and videos and build increasingly powerful systems. But the thing is, even if they can access as much copyrighted material as they want, these companies may still run into a problem.

Pretty soon they’re going to run out of digital data on the internet.

That human-created data they rely on is going to dry up. They’re using up this data faster than humans create it. One research organization estimates that by 2026, these companies will run out of viable data on the internet.

Wow. Well, in that case, what would these tech companies do? I mean, where are they going to go if they’ve already scraped YouTube, if they’ve already scraped podcasts, if they’ve already gobbled up the internet and that altogether is not sufficient?

What many people inside these companies will tell you, including Sam Altman, the chief executive of OpenAI, they’ll tell you that what they will turn to is what’s called synthetic data.

And what is that?

That is data generated by an AI model that is then used to build a better AI model. It’s AI helping to build better AI. That is the vision, ultimately, that they have for the future: they won’t need all this human-generated text. They’ll just have the AI build the text that will feed future versions of AI.

So they will feed the AI systems the material that the AI systems themselves create. But is that really a workable, solid plan? Is that considered high-quality data? Is that good enough?

If you do this on a large scale, you quickly run into problems. As we all know, as we’ve discussed on this podcast, these systems make mistakes. They hallucinate. They make stuff up. They show biases that they’ve learned from internet data. And if you start using the data generated by the AI to build new AI, those mistakes start to reinforce themselves.

The systems start to get trapped in these cul-de-sacs where they end up not getting better but getting worse.
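The degradation loop described here can be illustrated with a toy, deterministic sketch. This is purely illustrative — the “model” below is just a token-frequency table, not a real language model — but it shows the same mechanism: each generation trains on the previous generation’s output, rare tokens fall below the sampling threshold, and the diversity of the data collapses.

```python
from collections import Counter

def train(corpus):
    """'Train' a toy model: estimate token frequencies from the corpus."""
    return Counter(corpus)

def generate(model, n):
    """'Generate' n tokens: emit each token in proportion to its learned
    frequency, rounding down. Rare tokens round to zero and vanish -- a
    crude stand-in for a model under-sampling the tail of its data."""
    total = sum(model.values())
    out = []
    for tok, cnt in model.items():
        out.extend([tok] * (n * cnt // total))
    return out

# A tiny corpus with a long tail of rare tokens.
corpus = ["the"] * 50 + ["cat"] * 30 + ["sat"] * 15 + ["on"] * 4 + ["mat"] * 1

vocab_sizes = []
for _ in range(5):
    model = train(corpus)
    vocab_sizes.append(len(model))
    corpus = generate(model, 20)  # next generation trains on synthetic text

print(vocab_sizes)  # → [5, 3, 3, 3, 3]
```

After a single generation of training on its own output, the toy model has permanently lost the rare tokens (“on”, “mat”) — information that no later generation can recover.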

What you’re really saying is, these AI machines need the unique perfection of the human creative mind.

Well, as it stands today, that is absolutely the case. But these companies have grand visions for where this will go. And they feel, and they’re already starting to experiment with this, that if you have an AI system that is sufficiently powerful, if you make a copy of it, if you have two of these AI models, one can produce new data, and the other one can judge that data.

It can curate that data as a human would. It can provide the human judgment, so to speak. So as one model produces the data, the other one can judge it, discard the bad data, and keep the good data. And that’s how they ultimately see these systems creating viable synthetic data. But that has not happened yet, and it’s unclear whether it will work.
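The generate-and-judge scheme described here can be outlined as a simple filter loop. This is a hypothetical sketch: `generate_candidates` and `judge` are stand-ins, and each example’s quality is reduced to a stored score rather than a second model’s actual verdict.

```python
import random

def generate_candidates(rng, n):
    """Stand-in for the generator model: emits n candidate training
    examples, each with a quality score attached (here just a random
    number, so the sketch stays self-contained)."""
    return [{"text": f"example-{i}", "quality": rng.random()} for i in range(n)]

def judge(example, threshold=0.7):
    """Stand-in for the judge model: accepts only high-quality candidates.
    In the real scheme this would be a second copy of the model scoring
    the first model's output."""
    return example["quality"] >= threshold

rng = random.Random(0)  # fixed seed so the sketch is reproducible
candidates = generate_candidates(rng, 1000)
curated = [ex for ex in candidates if judge(ex)]

print(len(candidates), len(curated))
```

The curated subset — only the examples the judge accepts — is what would be fed back in as synthetic training data; everything else is discarded.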

It feels like the real lesson of your investigation is that if you have to allegedly steal data to feed your AI model and make it economically feasible, then maybe you have a pretty broken model. And that if you need to create fake data, as a result, which as you just said, kind of undermines AI’s goal of mimicking human thinking and language, then maybe you really have a broken model.

And so that makes me wonder if the folks you talk to, the companies that we’re focused on here, ever ask themselves the question, could we do this differently? Could we create an AI model that just needs a lot less data?

They have thought about other models for decades. The thing to realize here is that that is much easier said than done. We’re talking about creating systems that can mimic the human brain. That is an incredibly ambitious task. And after struggling with that for decades, these companies have finally stumbled on something that they feel works, that is a path to that incredibly ambitious goal.

And they’re going to continue to push in that direction. Yes, they’re exploring other options, but those other options aren’t working.

What works is more data and more data and more data. And because they see a path there, they’re going to continue down that path. And if there are roadblocks there, and they think they can knock them down, they’re going to knock them down.

But what if the tech companies never get enough or make enough data to get where they think they want to go, even as they’re knocking down walls along the way? That does seem like a real possibility.

If these companies can’t get their hands on more data, then these technologies, as they’re built today, stop improving.

We will see their limitations. We will see how difficult it really is to build a system that can match, let alone surpass the human brain.

These companies will be forced to look for other options, technically. And we will see the limitations of these grandiose visions that they have for the future of artificial intelligence.

OK, thank you very much. We appreciate it.

Glad to be here.

Here’s what else you need to know today. Israeli leaders spent Monday debating whether and how to retaliate against Iran’s missile and drone attack over the weekend. Herzi Halevi, Israel’s Military Chief of Staff, declared that the attack will be responded to.

In Washington, a spokesman for the US State Department, Matthew Miller reiterated American calls for restraint —

Matthew Miller: Of course, we continue to make clear to everyone that we talked to that we want to see de-escalation, that we don’t want to see a wider regional war. That’s something that’s been —

— but emphasized that a final call about retaliation was up to Israel.

Matthew Miller: Israel is a sovereign country. They have to make their own decisions about how best to defend themselves. What we always try to do —

And the first criminal trial of a former US President officially got underway on Monday in a Manhattan courtroom. Donald Trump, on trial for allegedly falsifying documents to cover up a sex scandal involving a porn star, watched as jury selection began.

The initial pool of 96 jurors quickly dwindled. More than half of them were dismissed after indicating that they did not believe that they could be impartial. The day ended without a single juror being chosen.

Today’s episode was produced by Stella Tan, Michael Simon Johnson, Mooj Zadie, and Rikki Novetsky. It was edited by Marc Georges and Liz O. Baylen, contains original music by Diane Wong, Dan Powell, and Pat McCusker, and was engineered by Chris Wood. Our theme music is by Jim Brunberg and Ben Landsverk of Wonderly.

That’s it for “The Daily.” I’m Michael Barbaro. See you tomorrow.



A Times investigation shows how the country’s biggest technology companies, as they raced to build powerful new artificial intelligence systems, bent and broke the rules from the start.

Cade Metz, a technology reporter for The Times, explains what he uncovered.


Background reading

How tech giants cut corners to harvest data for A.I.

What to know about tech companies using A.I. to teach their own A.I.


The Daily is made by Rachel Quester, Lynsea Garrison, Clare Toeniskoetter, Paige Cowett, Michael Simon Johnson, Brad Fisher, Chris Wood, Jessica Cheung, Stella Tan, Alexandra Leigh Young, Lisa Chow, Eric Krupke, Marc Georges, Luke Vander Ploeg, M.J. Davis Lin, Dan Powell, Sydney Harper, Mike Benoist, Liz O. Baylen, Asthaa Chaturvedi, Rachelle Bonja, Diana Nguyen, Marion Lozano, Corey Schreppel, Rob Szypko, Elisheba Ittoop, Mooj Zadie, Patricia Willens, Rowan Niemisto, Jody Becker, Rikki Novetsky, John Ketchum, Nina Feldman, Will Reid, Carlos Prieto, Ben Calhoun, Susan Lee, Lexie Diao, Mary Wilson, Alex Stern, Dan Farrell, Sophia Lanman, Shannon Lin, Diane Wong, Devon Taylor, Alyssa Moxley, Summer Thomad, Olivia Natt, Daniel Ramirez and Brendan Klinkenberg.

Special thanks to Sam Dolnick, Paula Szuchman, Lisa Tobin, Larissa Anderson, Julia Simon, Sofia Milan, Mahima Chablani, Elizabeth Davis-Moorer, Jeffrey Miranda, Renan Borelli, Maddy Masiello, Isabella Anderson and Nina Lassam.

Cade Metz writes about artificial intelligence, driverless cars, robotics, virtual reality and other emerging areas of technology.


Artificial intelligence driven demand forecasting: an application to the electricity market

  • Original Research
  • Published: 17 April 2024

Marco Repetto, Cinzia Colapinto (ORCID: orcid.org/0000-0003-1211-8033) & Muhammad Usman Tariq

Demand forecasting with maximum accuracy is critical to business management in various fields, from finance to marketing. In today’s world, many firms have access to a lot of data that they can use to implement sophisticated models. This was not possible in the past, but it has become a reality with the advent of large-scale data analysis. However, this also requires a distributed thinking approach due to the resource-intensive nature of Deep Learning models. Forecasting power demand is of utmost importance in the energy industry, and various methods and approaches have been employed by electrical companies for predicting electricity demand. This paper proposes a novel multicriteria approach for distributed learning in energy forecasting. We use a Quadratic Goal Programming approach to construct a robust decision rule ensemble that optimizes a pre-defined loss function. Our approach is independent of the loss function’s differentiability and is also model agnostic. This formulation offers interpretability for the decision-maker and demonstrates less proclivity of regression against the mean that affects autoregressive models. Our findings contribute to the field of energy forecasting and highlight the potential of our approach for enhancing decision-making in the energy industry.
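As a rough illustration of the ensemble idea in the abstract — not the authors’ implementation, whose Quadratic Goal Programming operates over decision rules — consider combining two base forecasters by minimizing the squared deviation of the combined forecast from the demand “goal.” With two models and one free weight, this reduces to closed-form least squares. The demand series and the two forecasters below are invented for illustration.

```python
import numpy as np

# Toy observed electricity demand and two base forecasters (assumed data).
y  = np.array([100., 110., 105., 120., 115.])
f1 = np.array([ 98., 112., 100., 118., 117.])   # e.g. an autoregressive model
f2 = np.array([105., 108., 107., 119., 112.])   # e.g. a neural forecaster

# Goal-programming reading: pick weights (w, 1-w) minimizing the sum of
# squared deviations of the combined forecast from the goal y. The combined
# forecast is f2 + w*(f1 - f2), so the optimal w is a 1-D least-squares fit.
d = f1 - f2
w = float(np.dot(y - f2, d) / np.dot(d, d))
combined = w * f1 + (1 - w) * f2

sse = lambda f: float(np.sum((f - y) ** 2))
print(round(w, 3), sse(f1), sse(f2), round(sse(combined), 2))
```

Because the combined forecast is the minimizer over the line containing both base forecasts (w = 1 gives f1, w = 0 gives f2), its in-sample squared error can never exceed that of either base model — the motivation for ensembling in the first place.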




This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Author information

Authors and Affiliations

CertX, Fribourg, Switzerland

Marco Repetto

IPAG Business school, Nice, France and Ca’ Foscari University of Venice, Venice, Italy

Cinzia Colapinto

Abu Dhabi University, Abu Dhabi, United Arab Emirates

Muhammad Usman Tariq


Contributions

MR has contributed to Sects. 1, 3, 4, 5, and 6; CC has contributed to Sects. 1, 2, 5, and 6; MUT has contributed to Sect. 1.

Corresponding author

Correspondence to Cinzia Colapinto.

Ethics declarations

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Ethical approval

This article does not contain any studies with human participants performed by any of the authors.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article

Repetto, M., Colapinto, C. & Tariq, M.U. Artificial intelligence driven demand forecasting: an application to the electricity market. Ann Oper Res (2024). https://doi.org/10.1007/s10479-024-05965-y

Received: 19 September 2023

Accepted: 21 March 2024

Published: 17 April 2024

DOI: https://doi.org/10.1007/s10479-024-05965-y


  • Demand forecasting
  • Federated learning
  • Deep learning
  • Multiple criteria decision making
  • Goal programming
  • Electricity forecasting




CMA outlines growing concerns in markets for AI Foundation Models

The CMA has outlined 3 key risks to effective competition on AI Foundation Models and has set out plans for further action in the market.


  • CMA outlines growing concerns regarding foundation models in CEO speech and update paper , as the market continues to develop at a “whirlwind pace” 
  • CMA identifies 3 key interlinked risks to fair, effective, and open competition  
  • CMA CEO: “When we started this work, we were curious. Now, we have real concerns.” 

The move from the Competition and Markets Authority (CMA) follows its initial report on AI Foundation Models (FMs) last year. The report proposed a set of principles to help sustain innovation and guide these markets toward positive outcomes for businesses, consumers, and the wider economy.   

Speaking at a conference in Washington DC, Chief Executive Officer Sarah Cardell shared highlights from the CMA’s update to its FMs work. In her remarks, she described the transformative promise of FMs as a potential “paradigm shift” for societies and economies, and outlined a range of fast-moving developments across FM markets which, underpinned by the CMA’s deepening understanding of the FM ecosystem, have prompted a marked increase in concern.

The speech highlights the growing presence across FM markets of a small number of incumbent technology firms which already hold positions of market power in many of today’s most important digital markets. These firms have strong positions both in the development of FMs (including through the supply of critical inputs like compute, data, and talent) and in the deployment of models, through key access points or routes to market, like apps and platforms.

The CMA is concerned that some firms may have both the ability and the incentive to shape these markets in their own interests – both to protect existing market power and to extend it into new areas. This could profoundly impact fair, open, and effective competition in FM-related markets, ultimately harming businesses and consumers, for example through reduced choice, lower quality, and higher prices, as well as stunting the flow of potentially unprecedented innovation and wider economic benefits from AI.  

The CMA’s update paper, to be published today, identifies an “interconnected web” of over 90 partnerships and strategic investments involving the same firms: Google, Apple, Microsoft, Meta, Amazon, and Nvidia (which is the leading supplier of AI accelerator chips). The CMA recognises the huge wealth of resources, expertise and innovation capability these large firms can bring to bear, and the role they will likely have in FM markets, as well as the fact that partnerships and arrangements of this kind can play a pro-competitive role in the technology ecosystem.  

However, the CMA cautions that powerful partnerships and integrated firms should not reduce rival firms’ ability to compete, nor should they be used to insulate powerful firms from competition. Maintaining diversity and choice in the market is also vital for safeguarding against the risk of over-dependence on a handful of major firms – particularly considering the breadth of potential use for FMs, across all sectors of the economy, such as finance, healthcare, education, defence, transport, and retail. The benefits of AI for businesses and consumers are much more likely to be realised in a world where the most powerful technology firms are subject to fair, open, and effective competition – both from potential challengers and between themselves – rather than one where they are able to leverage FMs to further entrench and extend their existing positions of power in digital markets.  

 Reflecting on the decade of experience the CMA has gained in digital markets, where “winner takes all dynamics” led to the rise of a small number of powerful platforms, Sarah Cardell says the CMA is “determined to apply the lessons of history” at this pivotal moment in the emergence of a new, potentially transformative technology.  

The speech and update paper highlight 3 key interlinked risks to fair, open, and effective competition:   

  • firms controlling critical inputs for developing FMs may restrict access to shield themselves from competition 
  • powerful incumbents could exploit their positions in consumer or business facing markets to distort choice in FM services and restrict competition in deployment 
  • partnerships involving key players could exacerbate existing positions of market power through the value chain 

The CMA’s update paper provides details on how each risk would be mitigated by its principles, as well as the actions the CMA is taking now, and considering taking in the near future, to address these concerns. This includes existing measures, like market investigations and merger review, but also consideration of developments in FMs as the CMA decides which digital activities to prioritise for investigation under the Digital Markets, Competition and Consumers Bill. The speech also highlights examples of current relevant work, such as the CMA’s ongoing cloud services market investigation, which includes a forward-looking assessment of the potential impact of FMs on competition in cloud services, and its review of Microsoft’s partnership with OpenAI to understand how it could affect competition in various parts of the ecosystem.

Sarah Cardell notes that the CMA is “keeping very close watch on current and emerging partnerships”. This includes use of merger control powers to assess whether, and in what circumstances, these kinds of arrangements fall within the merger rules and whether they raise competition concerns – particularly given the complex and opaque nature of some partnerships and arrangements.  

Sarah Cardell remarks, “By stepping up our merger review, we hope to gain more clarity and that clarity will also benefit the businesses themselves.”  

Sarah Cardell, CEO of the CMA, said:

When we started this work, we were curious. Now, with a deeper understanding and having watched developments very closely, we have real concerns.

The essential challenge we face is how to harness this immensely exciting technology for the benefit of all, while safeguarding against potential exploitation of market power and unintended consequences.

We’re committed to applying the principles we have developed, and to using all legal powers at our disposal – now and in the future – to ensure that this transformational and structurally critical technology delivers on its promise.

The CMA’s more detailed technical update report on FMs, including its finalised principles, will be published next week.  

For more information, visit the AI Foundation Models case page.

Notes to editors  

Sarah Cardell’s remarks can be read in full here. The CMA’s update paper will be published later today and will be available on the AI Foundation Models case page. A more detailed technical update report will be published next week.

The CMA has not yet taken any provisional decisions on which areas to prioritise for investigation under the new DMCC Bill, and any designation would be subject to a prior investigation. 

A CMA independent group of panel experts is already examining the conditions of competition in the provision of public cloud infrastructure services, as part of the ongoing Cloud Market Investigation.  

All media enquiries should be directed to the CMA press office by email at [email protected], or by phone on 020 3738 6460.

  • Frontiers in Computer Science
  • Computer Security
  • Research Topics

Cyber Security Prevention, Defenses Driven by AI, and Mathematical Modelling and Simulation Tools

About this Research Topic

The current dynamic innovation, research, and development in the fields of Artificial Intelligence (AI), Ultra-Smart Computation, Applied Mathematics, Modeling and Simulation, and Fast Internet, promote the creation of Automated Ultra Smart Cyberspace, which opens a new horizon of opportunities for ...

Keywords: Cyber Security, AI, Computing, Internet, Software, Mathematical modeling, Simulation tools

THE DECODER

Artificial Intelligence: News, Business, Research

Udio investor Will.i.am says AI music will fuel creativity and authenticity in the industry

Musician and AI investor Will.i.am sees the development of AI in the music industry as an opportunity for more creativity and expression. He believes that AI-generated music will allow artists to get back to the basics.

In an interview, will.i.am talks about the future of the music industry in the age of generative AI. The debate is topical, as two capable AI music generators, Suno and Udio, have just launched. Will.i.am is an investor in Udio.

Recently, around 200 prominent musicians wrote an open letter protesting against these generators , which they claim are an attack on human creativity. But will.i.am sees them more as a liberation. He is convinced that AI-generated music will fundamentally change the industry and lead to more authenticity and creativity.

Will.i.am argues that the music industry is already heavily influenced by algorithms that dictate how a song must be structured to be successful. Artists have to adapt their songs to the specifications of streaming services like Spotify to get a high number of views.

Social media platforms like TikTok add similar pressure, limiting creativity by forcing artists to make their songs work in 15 seconds to go viral. This trend predates the introduction of generative AI music services.

"Who is going to make better algorithmic music? People or AI?" asks will.i.am. The musician believes that the introduction of AI-generated music will lead to artists focusing on the essentials again: emotion, creativity and authenticity.

"I say thank AI Music for coming, because it's going to wake up a whole new generation of awesome music expressors," he says.

Label for human music

Will.i.am is convinced that in the future there will be a clear distinction between AI-generated and man-made music - similar to the distinction between organic and genetically modified food.

Consumers will specifically look for "man-made" music in the same way they ask for organic oranges today, he believes. He sees this as an opportunity for the industry to refocus on what music is all about: expressing passion, pain, joy, hopes and dreams - in a way that only humans can.

AI will be able to mimic that, but there will always be a demand for authentic music made by humans, according to will.i.am. "AI is not going to out-love us," he says.

  • Will.i.am, musician and AI investor, believes that AI-generated music will lead to more creativity and authenticity in the music industry, despite protests from some artists who see it as a threat to human creativity.
  • The music industry is already heavily influenced by algorithms from streaming services like Spotify and social media platforms like TikTok that dictate how songs must be structured to be successful, limiting artists' creativity.
  • Will.i.am predicts that in the future there will be a clear distinction between AI-generated and human-made music, similar to the difference between organic and genetically modified food, and that consumers will specifically seek out authentic, emotional music made by humans.

Communications in Humanities Research

- The Open Access Proceedings Series for Conferences

Vol. 16, 28 November 2023

Challenges and Opportunities of College Students’ Translation Education in the Artificial Intelligence Era

Artificial intelligence’s explosive growth in the twenty-first century has affected every craft and profession, and the translation industry is affected more than most. AI has brought a variety of opportunities to the translation industry, but the industry also faces unprecedented challenges. Students majoring in translation must confront the structural changes that machine translation has brought to the language-services industry, and under these changes educators must recognize the importance of cultivating college students’ translation-technology abilities. This paper therefore studies the opportunities and challenges of cultivating college students’ translation-technology abilities in the context of artificial intelligence, and how to respond appropriately to them. The research finds that, in the face of artificial intelligence, the main challenges to cultivating students’ translation abilities lie in ethics and competition for resources; however, the complexity and flexibility required by cross-cultural communication cannot yet be replaced by machines, and this is precisely the opportunity for developing students’ translation abilities. Translation is an important channel of international communication. In the era of artificial intelligence, exploring how to make good use of the advantages of human-machine collaboration can not only improve translators’ abilities but also build a more convenient and stable bridge for communication between countries.

artificial intelligence, translation skills, translator education, challenges, opportunities

Data Availability

The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.
