
AI for thesis writing — Unveiling 7 best AI tools

Madalsa


Writing a thesis is akin to piecing together a complex puzzle. Each research paper, every data point, and all the hours spent reading and analyzing contribute to this monumental task.

For many students, this journey is a relentless pursuit of knowledge, often marked by sleepless nights and tight deadlines.

Here, the potential of AI for writing a thesis or research papers becomes clear: artificial intelligence can step in, not to take over but to assist and guide.

Far from being just a trendy term, AI is revolutionizing academic research, offering tools that can make the task of thesis writing more manageable, more precise, and a little less overwhelming.

In this article, we'll discuss the impact of AI on the academic writing process and walk you through the best AI tools for thesis writing.

The Impact of AI on Thesis Writing

Artificial Intelligence offers a supportive hand in thesis writing, adeptly navigating vast datasets, suggesting enhancements in writing, and refining the narrative.

With an AI writing assistant integrated into your workflow, you no longer need to manually sift through endless articles: AI tools can spotlight the most pertinent pieces in mere moments. Need clarity or the right phrasing? AI-driven writing assistants offer real-time feedback, ensuring your work is both articulate and academically sound.

AI tools for thesis writing harness Natural Language Processing (NLP) to generate content, check grammar, and assist in literature reviews. Simultaneously, Machine Learning (ML) techniques enable data analysis, provide personalized research recommendations, and aid in proper citation.

And for the detailed tasks of academic formatting and referencing? AI streamlines it all, ensuring your thesis meets the highest academic standards.

However, understanding AI's role is pivotal. It's a supportive tool, not the primary author. Your thesis remains a testament to your unique perspective and voice.

AI for thesis writing is there to amplify that voice, ensuring it's heard clearly and effectively.

How AI tools supplement your thesis writing

AI tools have emerged as invaluable allies for scholars. With just a few clicks, these advanced platforms can streamline various aspects of thesis writing, from data analysis to literature review.

Let's explore how an AI tool can supplement and transform your thesis writing style and process.

Efficient literature review: AI tools can quickly scan and summarize vast amounts of literature, making the process of literature review more efficient. Instead of spending countless hours reading through papers, researchers can get concise summaries and insights, allowing them to focus on relevant content.

Enhanced data analysis: AI algorithms can process and analyze large datasets with ease, identifying patterns, trends, and correlations that might be difficult or time-consuming for humans to detect. This capability is especially valuable in fields with massive datasets, like genomics or social sciences.

Improved writing quality: AI-powered writing assistants can provide real-time feedback on grammar, style, and coherence. They can suggest improvements, ensuring that the final draft of a research paper or thesis is of high quality.

Plagiarism detection: AI tools can scan vast databases of academic content to ensure that a researcher's work is original and free from unintentional plagiarism.

Automated citations: Managing and formatting citations is a tedious aspect of academic writing. AI citation generators can automatically format citations according to specific journal or conference standards, reducing the chances of errors.

Personalized research recommendations: AI tools can analyze a researcher's past work and reading habits to recommend relevant papers and articles, ensuring that they stay updated with the latest in their field.

Interactive data visualization: AI can assist in creating dynamic and interactive visualizations, making it easier for researchers to present their findings in a more engaging manner.

Top 7 AI Tools for Thesis Writing

The academic field is brimming with AI tools tailored for academic paper writing. Here's a glimpse into some of the most popular and effective ones, expanding on their major uses, benefits, and reasons to consider them.

Typeset

If you've ever been bogged down by the minutiae of formatting or are unsure about specific academic standards, Typeset is a lifesaver.


Typeset specializes in formatting, ensuring academic papers align with various journal and conference standards.

It automates the intricate process of academic formatting, saving you from the manual hassle and potential errors and elevating your writing experience.

Wisio

An AI-driven writing assistant, Wisio elevates the quality of your thesis content. It goes beyond grammar checks, offering style suggestions tailored to academic writing.


This ensures your thesis is both grammatically correct and maintains a scholarly tone. For moments of doubt or when maintaining a consistent style becomes challenging, Wisio acts as your personal editor, providing real-time feedback.

Texti

Known for its ability to generate and refine thesis content using AI algorithms, Texti ensures a logical and coherent content flow in line with academic guidelines.


When faced with writer's block or a blank page, Texti can jumpstart your thesis writing process, aiding in drafting or refining content.

JustDone

JustDone is an AI for thesis writing and content creation. It offers a straightforward three-step process for generating content, from choosing a template to customizing details and enjoying the final output.


JustDone AI can generate thesis drafts based on the input provided by you. This can be particularly useful for getting started or overcoming writer's block.

This platform can refine and enhance the editing process, ensuring it aligns with academic standards and is free from common errors. Moreover, it can process and analyze data, helping researchers identify patterns, trends, and insights that might be crucial for their thesis.

Writefull

Tailored for academic writing, Writefull offers style suggestions to ensure your content maintains a scholarly tone.


This AI for thesis writing provides feedback on your language use, suggesting improvements in grammar, vocabulary, and structure. Moreover, it compares your written content against a vast database of academic texts, helping to ensure that your writing is in line with academic standards.

Isaac Editor

For those seeking an all-in-one solution for writing, editing, and refining, Isaac Editor offers a comprehensive platform.


Combining traditional text editor features with AI, Isaac Editor streamlines the writing process. It's an all-in-one solution for writing, editing, and refining, ensuring your content is of the highest quality.

PaperPal

PaperPal, an AI-powered personal writing assistant, enhances academic writing skills, particularly for PhD thesis writing and English editing.


This AI for thesis writing offers comprehensive grammar, spelling, punctuation, and readability suggestions, along with detailed English writing tips. It also provides insights on rephrasing sentences, improving article structure, and other edits to refine academic writing.

The platform also offers tools like "Paperpal for Word" and "Paperpal for Web" to provide real-time editing suggestions, and "Paperpal for Manuscript" for a thorough check of completed articles or theses.

Is it ethical to use AI for thesis writing?

AI for thesis writing has ignited discussions on authenticity. While AI tools offer unparalleled assistance, it's vital to maintain originality and not become overly reliant on them. Research thrives on unique contributions, and AI should be a supportive tool, not a replacement.

The key question: Can a thesis, significantly aided by AI, still be viewed as an original piece of work?

AI tools can simplify research, offer grammar corrections, and even produce content. However, there's a fine line between using AI as a helpful tool and becoming overly dependent on it.

In essence, while AI offers numerous advantages for thesis writing, it's crucial to use it judiciously. AI should complement human effort, not replace it. The challenge is to strike the right balance, ensuring genuine research contributions while leveraging AI's capabilities.

Wrapping Up

Nowadays, it's evident that AI tools are not just fleeting trends but pivotal game-changers.

They're reshaping how we approach, structure, and refine our theses, making the process more efficient and the output more impactful. But amidst this technological revolution, it's essential to remember the heart of any thesis: the researcher's unique voice and perspective .

AI tools are here to amplify that voice, not overshadow it. They guide you through the vast sea of information, ensuring your research stands out and resonates.

Try these tools out and let us know what worked for you the best.


Frequently Asked Questions

Can I use AI to write my thesis?

Yes, you can use AI to assist in writing your thesis. AI tools can help streamline various aspects of the writing process, such as data analysis, literature review, grammar checks, and content refinement.

However, it's essential to use AI as a supportive tool and not a replacement for original research and critical thinking. Your thesis should reflect your unique perspective and voice.

Is there an AI that can write research papers?

Yes, there are AI tools designed to assist in writing research papers. These tools can generate content, suggest improvements, help with formatting, and even provide real-time feedback on grammar and coherence.

Examples include Typeset, JustDone, Writefull, and Texti. However, while they can aid the process, the primary research, analysis, and conclusions should come from the researcher.

The "best" AI for writing papers depends on your specific needs. For content generation and refinement, Texti is a strong contender.

For grammar checks and style suggestions tailored to academic writing, Writefull is highly recommended. JustDone offers a user-friendly interface for content creation. It's advisable to explore different tools and choose one that aligns with your requirements.

How do I use AI to write my thesis?

To use AI for writing your thesis:

1. Identify the areas where you need assistance, such as literature review, data analysis, content generation, or grammar checks.

2. Choose an AI tool tailored for academic writing, like Typeset, JustDone, Texti, or Writefull.

3. Integrate the tool into your writing process. This could mean using it as a browser extension, a standalone application, or a plugin for your word processor.

4. As you write or review content, use the AI tool for real-time feedback, suggestions, or content generation.

5. Always review and critically assess the suggestions or content provided by the AI to ensure it aligns with your research goals and maintains academic integrity.


The 11 best AI tools for academic writing


By leveraging the power of the right AI tool, you can significantly improve the clarity, efficiency, and overall quality of your academic writing. In this guide, we reviewed and ranked 11 popular AI tools for academic writing, along with our top 3 choices, so that you can pick the best one.

Disclosure: This post contains affiliate links, which means I may earn a small commission if you make a purchase using the links below at no additional cost to you.

What are the best AI tools for academic writing?

  • 1. Trinka
  • 2. Genei
  • 3. QuillBot
  • 4. Writefull
  • 5. Grammarly
  • 6. Wordtune
  • 7. Paperpal
  • 8. Sourcely
  • 9. Rytr
  • 10. Writesonic
  • 11. TextCortex
  • Summary and top picks

With the rise of AI tools, academic writing is undergoing a remarkable transformation. The emergence of new AI-powered tools has revolutionized the way researchers, scholars, and students approach their writing tasks.

However, not all tools are created equal! And with the influx of options, it’s important for academics to discern between the high-quality ones and the mediocre ones that can hinder efficiency rather than enhance it.

High-quality AI tools for academic writing help you:

  • correct grammar and spelling mistakes,
  • paraphrase,
  • incorporate references,
  • and much more.

Having to use multiple tools for different purposes can be frustrating. Therefore, each AI tool was tested comprehensively to assess its all-round capabilities.

Furthermore, each tool's optional functions were weighed against its price to ensure a fair pricing structure. AI support for academic writing should be affordable and not strain your budget.

Here are Master Academia’s top picks for the best AI tools for academic writing in 2023:


1. Trinka

Best Overall for Academic Writing ($6.67/month)


Trinka is a unique AI-powered writing tool designed specifically for academic and technical writing.

What sets Trinka apart is its ability to go beyond basic grammar and spelling corrections. It assists writers in finding the appropriate tone and style for academic writing, while also improving conciseness and implementing formal syntax.

Trinka takes into account the specific research subjects, ensuring that the writing style, word choice, and tone align with disciplinary standards and scientific conventions.

In addition to these advanced writing enhancements, Trinka offers a range of additional features. It includes consistency checking to maintain a coherent writing style, publication readiness checks to prepare your work for submission, plagiarism checking to ensure originality, and a citation analyzer to assess the quality and relevance of your citations.

By providing these comprehensive tools, Trinka offers a convenient and all-encompassing solution for taking your academic writing to the next level.

Key Features:

  • Robust grammar and spell-checker – Real-time writing suggestions that also cover tone and style enhancement, syntax, and technical spelling make you a proficient academic writer.
  • Disciplinary and scientific conventions – Trinka provides specialized adjustments of language, style, and tone to adhere to scientific conventions in various research fields, based on existing academic publications.
  • Powerful plagiarism checker – Through the inclusion of a powerful plagiarism checker powered by iThenticate and Turnitin (renowned software for plagiarism detection), you do not have to worry about accidental plagiarism.
  • Wide range of additional features – Trinka offers extra features such as a citation analyzer, journal finder, and publication readiness checker, ensuring your academic writing is prepared for publication efficiently.
  • Customization – Trinka has a personal dictionary feature, allowing you to customize the spellchecker to suit your own research work, facilitating a seamless editing process.
  • Plug-ins – Plug-ins are available for your favorite browser, and work on Microsoft Word, Google Docs, Gmail, Evernote, Notion, and more.
  • Limitation – For the Trinka Citation Checker and Plagiarism Check, you need to upload your file separately.


You can use the basic version of Trinka for free, which includes access to all features but with a monthly word limit of 5000 words. The pricing for Trinka’s premium plan starts at $6.67 per month with annual billing, which is extremely affordable.

2. Genei

Best for Summarizing ($15.99/month)


Genei has established itself as a prominent player in the realm of academic AI tools, and rightfully so.

As a comprehensive tool designed for academics, Genei goes beyond assisting with workflow organization and document storage—it also offers a plethora of features tailored specifically for academic writing.

Genei streamlines the academic writing process by utilizing AI-generated summaries and note-taking shortcuts, extracting information from academic articles.

Users can benefit from comprehensive summaries of entire articles or manually highlighted passages, which can be expanded, condensed, rephrased, and summarized with ease using Genei.

Moreover, Genei allows users to seamlessly adapt writing styles and effortlessly incorporate references.

For those heavily reliant on literature reviews in their academic writing, Genei proves to be a game-changer.

Key Features:

  • Research article summaries – Academic writing often necessitates summarizing existing scientific articles, and Genei excels in simplifying this task with its high-quality AI-generated summaries.
  • Integrated workflow management – With Genei, you have the ability to save, store, and organize your publications and other documents, providing you with a comprehensive solution to manage your entire workflow within the tool.
  • Summarizing notes – When reading and summarizing within Genei, you have the option to utilize the note function, enabling you to highlight specific text passages and gather your thoughts, all of which can be conveniently converted into text format.
  • Control and customization over generated summaries – Genei allows you to provide specific instructions to the AI, such as requesting to "expand," "rephrase," or "summarize" a particular section.
  • Academic discount – As an academic, you can receive a 40% discount on your Genei Pro subscription.
  • Limitations – Genei does not offer the option to customize the style and tone to adhere to specific disciplinary standards. Also, Genei must be used through its online interface, as the tool does not offer integrations or plug-ins with other platforms.


Genei offers two pricing structures, one for professionals and another for academics.

  • Professionals – The basic version costs £9.99 per month, providing unlimited projects and resources but excluding GPT3 summaries and AI-powered expand, paraphrase & rephrase functions, with a maximum individual file upload of 5GB. The professional pro version, priced at £29.99/month, offers unlimited file upload and full functionality. Annual discounts are available.
  • Academics – The basic version costs £4.99 per month, while the pro version costs £19.99, which is essential for accessing the summaries and paraphrasing functions. With the annual discount, the pro version costs £15.99 per month.

3. QuillBot

Best for Paraphrasing ($8.33/month)


QuillBot is an AI-powered paraphrase tool that helps you to rewrite, edit, and adjust the tone of your text for increased clarity.

With QuillBot's all-in-one Co-Writer, you can access paraphrasing, summarizing, citation creation, and essay writing tools in a single location.

QuillBot's online paraphraser allows you to rephrase any text using a variety of options. It offers two free modes and five premium modes, allowing you to control the level of vocabulary change.

A synonym slider enables you to adjust the amount of rewriting, in addition to a built-in thesaurus for customizing your paraphrases.

In simple terms, QuillBot’s AI will collaborate with you to generate effective rephrasing. You have a lot of control as you can compare outputs from all seven available modes to choose the most suitable paraphrase.

QuillBot integrates seamlessly with Chrome and Microsoft Word, eliminating the need to switch windows when rephrasing sentences, paragraphs, or articles.

Key Features:

  • Paraphrasing options – QuillBot allows you to choose from seven different paraphrasing options (standard, fluency, formal, simple, creative, expand, shorten) to adjust your paraphrasing to your needs.
  • Built-in thesaurus – You can customize paraphrases with synonyms using the built-in thesaurus, which is extremely handy.
  • Track changes – You can view word count and percent change to feel confident about your revisions when paraphrasing.
  • All-in-one – Access all of QuillBot’s tools in one writing space, including paraphrasing, summarizing, access to its citation generator, and its plagiarism checker.
  • Translation option – Translate text into 30+ languages.
  • Seamless integration – It is easy to incorporate QuillBot into your existing writing tools via Word and Chrome extensions.
  • Pause subscription – Academics and students can pause their subscription to align with their academic writing periods.
  • Limitations – QuillBot does not offer the option to customize the style and tone to adhere to specific disciplinary standards, and it has no built-in note-taking option.


The free plan of QuillBot allows paraphrasing of up to 125 words and summarizing of up to 1200 words at a time, but excludes advanced features like advanced grammar rewrites, comparing paraphrasing options, and the plagiarism checker.

With the premium plan, you gain access to full functionality, including unlimited word paraphrasing, summarizing up to 6000 words, faster processing, advanced grammar features, tone detection, and more. The premium plan is priced at $19.95 per month or $8.33 per month when paid annually.

QuillBot also offers a 100% money back guarantee for the QuillBot Premium Plan.

4. Writefull

Solid Editing and Content Creation Tool ($5.46/month)


Writefull utilizes language models trained on extensive journal articles to provide tailored edits for academic writing and offers automatic paraphrasing and text generation.

With additional AI widgets like the Abstract Generator, Academizer, Paraphraser, and Title Generator, it provides inspiration and assistance for academic writers.

Writefull is a powerful editing tool designed for individuals who struggle with writer’s block and prefer to revise and edit existing text rather than creating it from scratch.

Writefull is available for Word and Overleaf, allowing users to revise, upload, and download documents with track changes. This can be particularly useful if a document with track changes is required for a journal submission.

Key Features:

  • Data security – Writefull provides secure and quick text revisions without storing any user data or search history.
  • Track Changes – Users can upload their text for a language check, evaluate overall language quality, and make corrections using Track Changes.
  • AI-generated abstracts and titles: Writefull helps you to write abstracts based on your input, and provides suggestions for titles.
  • Institutional Premium Accounts – Universities can purchase a license which makes Writefull free to their students and staff.
  • GPT detector – Writefull users can utilize a GPT detector feature to determine if a text comes from GPT-3, GPT-4, or ChatGPT models.
  • Limitations – Writefull's Academizer is supposed to make texts sound more academic, but it does not adjust to different disciplinary standards. Not all of the paraphrasing modes are suitable for academic writing. Abstracts and titles generated by Writefull may not be flawless and may require some editing; nonetheless, they serve as an excellent source of inspiration.


Writefull can be used with limited functionality for free. Its Premium Plan offers unlimited use of all features at a cost of $15.37 per month.

However, there are significant savings if you choose to pay annually, as it amounts to only $5.46 per month.

5. Grammarly

Tried and Tested Writing Assistant ($12.00/month)


Grammarly is widely recognized as the leading AI-powered writing assistance tool. One of Grammarly’s key advantages is its versatility and convenience.

Grammarly stands out among other AI tools by having a widespread and popular institutional license, which universities readily embrace.

Despite the common reservations university administrators hold against AI usage, Grammarly has established itself as a widely accepted and trusted tool among academics, researchers, and students.

Once installed, it seamlessly integrates into various desktop applications and websites, providing suggestions and assistance as you write across different platforms, including apps, social media, documents, messages, and emails, without requiring separate installations.

Grammarly's popularity in the academic community can be attributed to its support for citation style formatting and robust plagiarism detection, making it a valuable tool for academic writing.

Key Features:

  • Style and tone real-time assistance – Grammarly provides real-time suggestions and guidance on improving the style and tone of your writing.
  • Solid free version – The free version of Grammarly is reliable for basic grammar and spelling checks, as well as identifying unclear sentences and auto-citations.
  • Additional features: A range of advanced features, plagiarism detection, citation checking, and essay analysis, help you to identify unintentional plagiarism and enhance the overall quality of your writing.
  • Special offers for education: Grammarly for Education is available as an institutional license for universities. It ensures high security standards and data protection, which is particularly crucial when dealing with research data. This contributes to Grammarly’s acceptance in academia.
  • Limitations – Grammarly is not directly targeted at academic writing, which means it may not fully cater to the specific needs and conventions of academic writing styles. And while Grammarly's premium plan provides suggestions to improve the overall tone of your writing, it lacks subdivision according to research fields or disciplines, so it may not meet the requirements for the specific scientific tone needed in academic research writing.


Grammarly’s free plan offers valuable basic writing suggestions to improve your writing.

The premium plan may seem expensive at $30 per month, but with the annual savings of 60%, it becomes much more affordable at $12 per month.

The business account may not be of interest to students or researchers. However, universities can opt for Grammarly for Education, which provides licenses for free premium plans to students and staff.

6. Wordtune

Efficient Paraphrasing Tool ($9.99/mo)


Wordtune utilizes sophisticated AI tools and language models that possess a deep understanding of written text, including its context and semantics.

Wordtune goes beyond mere grammar and spelling corrections, empowering you to express your own ideas effectively in writing.

The tool itself proclaims that it has gained the trust of students and researchers at renowned universities.

Although Wordtune excels in paraphrasing, providing synonym recommendations and an integrated plagiarism check for seamless usage, it is important to note that its focus is not primarily on academic writing, which influences the training of the system.

Key Features:

  • Synonyms – Wordtune provides contextual synonym recommendations for your sentences.
  • Grammar and spelling correction – With Wordtune you can rest assured that your text is free from grammar and spelling mistakes.
  • Plagiarism-free writing – Wordtune helps you avoid plagiarism by rephrasing text while preserving its original meaning with its built-in plagiarism checker.
  • Wide range of extensions – Wordtune offers convenient extensions for Chrome, Microsoft Word, iOS, Teams, and more.
  • Affordable – Wordtune provides cost-effective AI-powered paraphrasing capabilities.
  • Limitations – Wordtune does not have specific features or styles tailored for academic writing. It primarily focuses on lengthening or shortening text and does not offer extensive tools for academic writing needs.


Wordtune offers a free version with limited features, while the premium version is priced at $24.99 per month. However, users can benefit from a significant 60% discount when opting for an annual subscription, which brings the premium version down to $9.99 per month.

7. Paperpal

Academic Language Editor ($8.25/month)


Paperpal, developed by Researcher.life, is a specialized AI tool designed for researchers and academic writers, leveraging the expertise gained from editing numerous manuscripts by professional editors.

With Paperpal, you can effortlessly enhance your writing by addressing grammar errors and improving sentence structure, ensuring your credibility remains intact.

Moreover, Paperpal offers advanced features such as accurate translation and contextual synonyms, along with the choice between Essential and Extensive editing modes, providing flexibility to tailor the editing process to your specific needs.

Available as Paperpal for Word, Web, and Manuscript, this comprehensive tool also checks for structural and technical inconsistencies in your writing.

Key Features:

  • Trained with expertise of academic editors – Paperpal is an AI system that has undergone training on academic writing and human-edited manuscripts, guaranteeing high standards.
  • Translation – With Paperpal, you can effortlessly translate academic texts from over 25 languages to academic English.
  • Compliance with technical language standards – The manuscript checker in Paperpal ensures technical compliance and maintains language quality standards required for journal submissions.
  • Consistency feature – Paperpal’s consistency feature checks for and detects stylistic inconsistencies unique to research content, allowing for seamless correction.
  • Data security – Your data is secure with Paperpal, as it adheres to a certified data security protocol and is compliant with ISO/IEC 27001:2013 standards.
  • Limitations – Paperpal does not offer a subdivision into research fields or disciplinary standards, meaning it does not cater to specific tones or styles required by different academic disciplines. It is currently limited to integration with Microsoft Word and web browsers, and it lacks a built-in plagiarism checker.


The Prime plan offers unlimited language suggestions and is priced at $99 per year, which translates to just $8.25 per month when billed annually. For those who prefer a monthly plan, it is available at an affordable rate of $12 per month.

8. Sourcely

Smart Reference Tool While Writing ($3.00/mo)


Sourcely is an AI-powered source-finding tool developed by a team of students, offering an easy-to-use solution for academic writers in search of references.

By analyzing text and identifying key themes, Sourcely searches through a vast data set to locate relevant and reliable sources, providing academic writers with the information needed to support their work.

Good references are crucial in academic writing, as they provide legitimacy to arguments and claims.

Simply input your essay title or text, and Sourcely finds suitable sources to enhance your work.

Key Features:

  • Source discovery – Sourcely provides a unique approach where you can first write your content and then effortlessly discover relevant sources to support your ideas.
  • Summaries – Sourcely offers a convenient feature called “Summarize a Source,” allowing users to obtain a summary of an article or source they are considering for their work.
  • Affordability – Sourcely is highly affordable, making it an accessible option for users.
  • Limitations – Sourcely's source recommendations are appealing, but they are not comprehensive enough to rely on exclusively; you should still consult resources from other reliable sources. Sourcely also has limited features compared to other AI writing tools.


Sourcely offers great affordability with a price of $5.99 per month or $36.99 per year. While it may have fewer features compared to other academic writing tools, its lower price point still makes it a valuable and useful tool for academic writing.

9. Rytr

Fast Translating and Rewording Tool ($7.50/mo)


Rytr is an AI writing assistant that quickly generates high-quality content at an affordable price, primarily targeting marketers, copywriters, and entrepreneurs.

While it is recognized by G2 (business software reviews) as a leading brand in the AI Writing space and claims to be “loved by academicians,” it is important to note that Rytr is not trained on academic articles.

Rytr is a text-generating AI tool. Depending on the purpose, academics can find it useful for selecting from multiple languages and tones of voice, as well as rewording and shortening text.

With the convenience of a browser extension, Rytr saves time and ensures your copy is top-notch, especially for emails, social media posts, or blogs.

Key Features:

  • 40+ use cases – Rytr is an AI writing assistant that offers content generation for over 40 use cases, including emails, cover letters, and blog posts, with the ability to both shorten and lengthen content as needed.
  • Generous free plan – While Rytr is not specifically targeting academic writing, it provides a generous free plan that can be beneficial for tasks such as writing emails and blog posts for research dissemination.
  • Translation – Rytr can help you to translate your texts into 30+ languages.
  • Customization – The platform offers a range of options to enhance the writing process, including language selection, tone customization, expanding or rephrasing text, formatting options, and even a readability score feature.
  • Limitations – Rytr is not suitable for essay or academic writing purposes, as it lacks features specifically designed for these tasks. It is not targeted towards researchers and does not provide valuable tools like citation assistance, which is essential for academic writing. And some of its features, such as SEO optimization, are irrelevant for academic writing.


Rytr offers a free plan that allows users to generate content up to 10,000 characters per month. For more advanced features and increased usage, there is the Saver Plan priced at $9 per month (or $7.50 per month when billed annually).

Alternatively, the Unlimited plan is available at $29 per month or $290 per year. These different pricing tiers cater to the diverse needs of users, ensuring they can find the plan that best suits their requirements.

10. Writesonic

Paraphrasing and Translation Tool ($12.67/mo)


While Writesonic is primarily geared towards marketing teams and entrepreneurs, it offers an intriguing feature for academics: the paraphrasing tool. This tool allows users to rephrase content in multiple languages.

With Writesonic ‘s paraphrasing tool, you can effortlessly rewrite sentences, paragraphs, essays, and even entire articles with a simple click.

Writesonic claims that the produced content is 100% unique and free from plagiarism.

Upon generating a paragraph, Writesonic provides three different versions for you to choose from. It allows you to select the best option or make edits and revisions using the various variations.

Key Features:

  • Choice – Writesonic provides three paraphrased options for each paraphrase, ensuring you find the most suitable and impactful version for your content.
  • Switching from passive to active voice – Transform your writing by switching from passive voice to active voice. Active voice sentences provide clarity, conciseness, and impact, ensuring you don’t miss out on great opportunities. The rewording tool allows you to rephrase paragraphs and change the voice of your sentences effortlessly.
  • Paraphrase your content in different languages – Writesonic’s Paraphrase tool can be used to conduct AI paragraph rephrasing in up to 26 different languages.
  • Limitations – Writesonic is not specifically designed for academic writing, and its features are not tailored to meet the specific requirements of academic writing. The platform lacks an academic writing style, which is essential for maintaining scholarly integrity and adhering to academic conventions, and some of its features, such as SEO optimization, are not directly applicable to academic writing tasks.


You can start with a free trial of Writesonic to experience its features. If you decide to upgrade to the Pro version, it is available at a cost of $12.67 per month.

11. TextCortex

Summarizing and Paraphrasing Tool ($19.99/mo)


With TextCortex you can say goodbye to any worries about wording and spelling mistakes. Furthermore, it can help you to speed up your reading process.

TextCortex is an AI tool which can condense long texts into concise summaries, capturing the essential points.

Moreover, it can enhance your fluency, adapting vocabulary, tone, and style to match any situation.

Key Features:

  • Paraphrasing – TextCortex offers a powerful paraphrasing tool to help you rephrase and enhance your text.
  • Translations – TextCortex’s translation feature allows you to effortlessly write in over 25 languages including French, German, Spanish, Swedish, and more.
  • Limitations – TextCortex is not specifically designed for academic writing, catering to a broader audience instead, and it may not be cost-effective for academics given its high price relative to the limited functionality it offers for academic writing purposes.


With the free version of TextCortex, you have the ability to create up to 10 pieces per day. For enhanced features and unlimited usage, the Pro version is available at a price of $19.99 per month.

Summary and top picks

The landscape of AI writing tools is continuously evolving, witnessing the introduction of new tools regularly. However, not all these tools are equally suitable for academic writing, as their effectiveness depends on your specific goals and requirements.

While some tools, although not specifically designed for academic writing, can still provide valuable assistance in certain areas, there are standout options that are solely dedicated to enhancing academic writing.

Keeping this in mind, our top picks for academic writing support are the following AI tools:


Topics for Master Theses at the Chair for Artificial Intelligence

Smart City / Smart Mobility

  • Traffic Forecasting with Graph Attention Networks
  • Learning Traffic Simulation Parameters with Reinforcement Learning
  • Extending the Mannheim Mobility Model with Individual Bike Traffic

AI for Business Process Management

  • Applications of deep neural networks in Online Conformance Checking
  • Accurate Business Process Simulation (BPS) models based on deep learning
  • How to tackle concept drift in Predictive Process Monitoring (PPM)

Explainable and Fair Machine Learning

  • Extracting Causal Models from Module Handbooks for Explainable Student Success Prediction
  • Investigating Different Techniques to Improve Fairness for Tabular Data
  • Data-induced Bias in Social Simulations
  • Learning Causal Models from Tabular Data

Human Activity and Goal Recognition

  • Reinforcement Learning for Goal Recognition
  • Investigating the Difficulty of Goal Recognition Problems
  • Enhancing Audio-Based Activity Recognition through Autoencoder Encoded Representations
  • Activity Recognition from Audio Data in a Kitchen Scenario
  • Speaker Diarization and Identification in a Meeting Scenario

Machine Learning for Supply Chain Optimization

  • Time Series Analysis & Forecasting of Events (Sales, Demand, etc.)
  • Integrated vs. separated optimization: theory and practice
  • Leveraging deep learning to build a versatile end-to-end inventory management model
  • Reinforcement learning for the vehicle routing problem
  • Metaheuristics in SCM: Overview and benchmark study
  • Finetuning parametrized inventory management system

Anomaly Detection on Server Logs

  • Analyse real-life server logs stored in an existing OpenSearch library (Graylog)
  • Learn values describing the normal behavior of servers and detect anomalies in logged messages
  • Implement a simple alert system (existing systems like Icinga can be used)
  • Prepare results in a (Web-)GUI
Further Topics

  • Creating eLearning Recommender Systems using NLP
  • Hyperparameter Optimization for Symbolic Knowledge Graph Completion
  • Applying Symbolic Knowledge Graph Completion to Inductive Link Prediction
  • Data Augmentation via Generative Adversarial Networks (GANs)
  • Autoencoders for Sparse, Irregularly Spaced Time Series Sequences


Artificial Intelligence

Completed Theses

State space search solves navigation tasks and many other real world problems. Heuristic search, especially greedy best-first search, is one of the most successful algorithms for state space search. We improve the state of the art in heuristic search in three directions.

In Part I, we present methods to train neural networks as powerful heuristics for a given state space. We present a universal approach to generate training data using random walks from a (partial) state. We demonstrate that our heuristics trained for a specific task are often better than heuristics trained for a whole domain. We show that the performance of all trained heuristics is highly complementary. There is no clear pattern of which trained heuristic to prefer for a specific task. In general, model-based planners still outperform planners with trained heuristics, but our approaches exceed the model-based algorithms in the Storage domain. To our knowledge, a learning-based planner has exceeded the state-of-the-art model-based planners only once before, in the Spanner domain.
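The abstract does not spell out the exact data-generation procedure, but the general recipe of labeling states by random-walk length is easy to sketch. The following minimal Python sketch is illustrative only: the `successors` interface and the use of step counts as noisy distance labels are assumptions, not details taken from the thesis.

```python
import random

def random_walk_samples(start, successors, num_walks, max_len):
    """Generate (state, label) training pairs for a learned heuristic.

    Hypothetical scheme: walk away from `start` (e.g., a goal state under
    a regression/backward successor function) and record the step count
    as a noisy distance label. `successors(s)` yields (action, state) pairs.
    """
    samples = []
    for _ in range(num_walks):
        state = start
        length = random.randrange(1, max_len + 1)
        for step in range(1, length + 1):
            succs = [t for _, t in successors(state)]
            if not succs:
                break  # dead end: stop this walk early
            state = random.choice(succs)
            samples.append((state, step))  # step count approximates distance
    return samples
```

The resulting pairs can then be fed to any supervised regressor; the thesis trains neural networks on such data.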

A priori, it is unknown whether a heuristic, or in the more general case a planner, performs well on a task. Hence, we trained online portfolios to select the best planner for a task. Today, all online portfolios are based on handcrafted features. In Part II, we present new online portfolios based on neural networks, which receive the complete task as input rather than just a few handcrafted features. Additionally, our portfolios can reconsider their choices. Both extensions greatly improve the state of the art of online portfolios. Finally, we show that explainable machine learning techniques, as an alternative to neural networks, also make good online portfolios. Additionally, we present methods to improve our trust in their predictions.

Even if we select the best search algorithm, we cannot solve some tasks in reasonable time. We can speed up the search if we know how it will behave in the future. In Part III, we inspect the behavior of greedy best-first search with a fixed heuristic on simple tasks of a domain to learn its behavior for any task of the same domain. Once greedy best-first search has expanded a progress state, it expands only states with lower heuristic values. We learn to identify progress states and present two methods to exploit this knowledge. Building upon this, we extract the bench transition system of a task and generalize it in such a way that we can apply it to any task of the same domain. We can use this generalized bench transition system to split a task into a sequence of simpler searches.

In all three research directions, we contribute new approaches and insights to the state of the art, and we indicate interesting topics for future work.

Greedy best-first search (GBFS) is a sibling of A* in the family of best-first state-space search algorithms. While A* is guaranteed to find optimal solutions of search problems, GBFS does not provide any guarantees but typically finds satisficing solutions more quickly than A*. A classical result of optimal best-first search shows that A* with admissible and consistent heuristic expands every state whose f-value is below the optimal solution cost and no state whose f-value is above the optimal solution cost. Theoretical results of this kind are useful for the analysis of heuristics in different search domains and for the improvement of algorithms. For satisficing algorithms a similarly clear understanding is currently lacking. We examine the search behavior of GBFS in order to make progress towards such an understanding.
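For readers unfamiliar with the algorithm under study, here is a minimal greedy best-first search in Python. It is a textbook sketch, not code from the thesis; the `successors`, `is_goal`, and `h` interfaces are illustrative. The key point is that the priority queue is ordered by the heuristic value h(s) alone, whereas A* would order by g(s) + h(s).

```python
import heapq
from itertools import count

def gbfs(initial, is_goal, successors, h):
    """Greedy best-first search: expand states in order of h-value only.

    `successors(s)` yields (action, state) pairs; states must be hashable.
    Returns a list of actions reaching a goal, or None if none is found.
    """
    tie = count()  # tie-breaker so the heap never compares states directly
    frontier = [(h(initial), next(tie), initial, [])]
    seen = {initial}
    while frontier:
        _, _, state, plan = heapq.heappop(frontier)
        if is_goal(state):
            return plan
        for action, succ in successors(state):
            if succ not in seen:
                seen.add(succ)
                heapq.heappush(frontier, (h(succ), next(tie), succ, plan + [action]))
    return None
```

As the abstract notes, which states such a procedure expands depends heavily on how ties between equal h-values are broken; the counter above is just one arbitrary tie-breaking policy.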

We introduce the concept of high-water mark benches, which separate the search space into areas that are searched by GBFS in sequence. High-water mark benches allow us to exactly determine the set of states that GBFS expands under at least one tie-breaking strategy. We show that benches contain craters. Once GBFS enters a crater, it has to expand every state in the crater before being able to escape.

Benches and craters allow us to characterize the best-case and worst-case behavior of GBFS in given search instances. We show that computing the best-case or worst-case behavior of GBFS is NP-complete in general but can be computed in polynomial time for undirected state spaces.

We present algorithms for extracting the set of states that GBFS potentially expands and for computing the best-case and worst-case behavior. We use the algorithms to analyze GBFS on benchmark tasks from planning competitions under a state-of-the-art heuristic. Experimental results reveal interesting characteristics of the heuristic on the given tasks and demonstrate the importance of tie-breaking in GBFS.

Classical planning tackles the problem of finding a sequence of actions that leads from an initial state to a goal. Over the last decades, planning systems have become significantly better at answering the question whether such a sequence exists by applying a variety of techniques which have become more and more complex. As a result, it has become nearly impossible to formally analyze whether a planning system is actually correct in its answers, and we need to rely on experimental evidence.

One way to increase trust is the concept of certifying algorithms, which provide a witness which justifies their answer and can be verified independently. When a planning system finds a solution to a problem, the solution itself is a witness, and we can verify it by simply applying it. But what if the planning system claims the task is unsolvable? So far there was no principled way of verifying this claim.

This thesis contributes two approaches to create witnesses for unsolvable planning tasks. Inductive certificates are based on the idea of invariants. They argue that the initial state is part of a set of states that we cannot leave and that contains no goal state. In our second approach, we define a proof system that proves in an incremental fashion that certain states cannot be part of a solution until it has proven that either the initial state or all goal states are such states.
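Stated formally (the notation here is ours, condensed from the prose above), a set of states S is an inductive certificate of unsolvability for a task with initial state s_I, goal states S_G, and transition relation "to" if:

```latex
s_I \in S, \qquad
\forall s, s' :\; \bigl(s \in S \wedge s \to s'\bigr) \Rightarrow s' \in S, \qquad
S \cap S_G = \emptyset.
```

The three conditions mirror the prose: the initial state lies in S, S cannot be left, and S contains no goal state, so no plan can exist.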

Both approaches are complete in the sense that a witness exists for every unsolvable planning task, and they can be verified efficiently (with respect to the size of the witness) by an independent verifier if certain criteria are met. To show their applicability to state-of-the-art planning techniques, we provide an extensive overview of how these approaches can cover several search algorithms, heuristics and other techniques. Finally, we show with an experimental study that generating and verifying these explanations is not only theoretically possible but also practically feasible, thus making a first step towards fully certifying planning systems.

Heuristic search with an admissible heuristic is one of the most prominent approaches to solving classical planning tasks optimally. In the first part of this thesis, we introduce a new family of admissible heuristics for classical planning, based on Cartesian abstractions, which we derive by counterexample-guided abstraction refinement. Since one abstraction usually is not informative enough for challenging planning tasks, we present several ways of creating diverse abstractions. To combine them admissibly, we introduce a new cost partitioning algorithm, which we call saturated cost partitioning. It considers the heuristics sequentially and uses the minimum amount of costs that preserves all heuristic estimates for the current heuristic before passing the remaining costs to subsequent heuristics until all heuristics have been served this way.
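The loop structure of saturated cost partitioning is simple enough to sketch. The following Python fragment is a paraphrase of the description above, not the thesis's implementation; the heuristic objects and their `saturated_costs` method are hypothetical interfaces.

```python
def saturated_cost_partitioning(heuristics, costs):
    """Distribute operator costs over an ordered list of abstraction heuristics.

    `costs` maps each operator to its cost. Each heuristic is assumed
    (hypothetically) to provide saturated_costs(costs): the minimal costs
    under which all of its current estimates are preserved. Because the
    returned cost functions sum to at most the original costs, the
    corresponding heuristic values may be summed admissibly.
    """
    remaining = dict(costs)
    partition = []
    for h in heuristics:
        saturated = h.saturated_costs(remaining)  # minimum costs preserving h
        partition.append(saturated)
        for op in remaining:
            remaining[op] -= saturated.get(op, 0)  # leftovers flow to later heuristics
    return partition
```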

In the second part, we show that saturated cost partitioning is strongly influenced by the order in which it considers the heuristics. To find good orders, we present a greedy algorithm for creating an initial order and a hill-climbing search for optimizing a given order. Both algorithms make the resulting heuristics significantly more accurate. However, we obtain the strongest heuristics by maximizing over saturated cost partitioning heuristics computed for multiple orders, especially if we actively search for diverse orders.

The third part provides a theoretical and experimental comparison of saturated cost partitioning and other cost partitioning algorithms. Theoretically, we show that saturated cost partitioning dominates greedy zero-one cost partitioning. The difference between the two algorithms is that saturated cost partitioning opportunistically reuses unconsumed costs for subsequent heuristics. By applying this idea to uniform cost partitioning we obtain an opportunistic variant that dominates the original. We also prove that the maximum over suitable greedy zero-one cost partitioning heuristics dominates the canonical heuristic and show several non-dominance results for cost partitioning algorithms. The experimental analysis shows that saturated cost partitioning is the cost partitioning algorithm of choice in all evaluated settings and it even outperforms the previous state of the art in optimal classical planning.

Classical planning is the problem of finding a sequence of deterministic actions in a state space that lead from an initial state to a state satisfying some goal condition. The dominant approach to optimally solve planning tasks is heuristic search, in particular A* search combined with an admissible heuristic. While there exist many different admissible heuristics, we focus on abstraction heuristics in this thesis, and in particular, on the well-established merge-and-shrink heuristics.

Our main theoretical contribution is to provide a comprehensive description of the merge-and-shrink framework in terms of transformations of transition systems. Unlike previous accounts, our description is fully compositional, i.e. can be understood by understanding each transformation in isolation. In particular, in addition to the name-giving merge and shrink transformations, we also describe pruning and label reduction as such transformations. The latter is based on generalized label reduction, a new theory that removes all of the restrictions of the previous definition of label reduction. We study the four types of transformations in terms of desirable formal properties and explain how these properties transfer to heuristics being admissible and consistent or even perfect. We also describe an optimized implementation of the merge-and-shrink framework that substantially improves the efficiency compared to previous implementations.

Furthermore, we investigate the expressive power of merge-and-shrink abstractions by analyzing factored mappings, the data structure they use for representing functions. In particular, we show that there exist certain families of functions that can be compactly represented by so-called non-linear factored mappings but not by linear ones.

On the practical side, we contribute several non-linear merge strategies to the merge-and-shrink toolbox. In particular, we adapt a merge strategy from model checking to planning, provide a framework to enhance existing merge strategies based on symmetries, devise a simple score-based merge strategy that minimizes the maximum size of transition systems of the merge-and-shrink computation, and describe another framework to enhance merge strategies based on an analysis of causal dependencies of the planning task.

In a large experimental study, we show the evolution of the performance of merge-and-shrink heuristics on planning benchmarks. Starting with the state of the art before the contributions of this thesis, we subsequently evaluate all of our techniques and show that state-of-the-art non-linear merge-and-shrink heuristics improve significantly over the previous state of the art.

Admissible heuristics are the main ingredient when solving classical planning tasks optimally with heuristic search. Higher admissible heuristic values are more accurate, so combining them in a way that dominates their maximum and remains admissible is an important problem.

The thesis makes three contributions in this area. Extensions to cost partitioning (a well-known heuristic combination framework) make it possible to produce higher estimates from the same set of heuristics. The new heuristic family called operator-counting heuristics unifies many existing heuristics and offers a new way to combine them. Another new family of heuristics called potential heuristics makes it possible to cast the problem of finding a good heuristic as an optimization problem.
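To make the operator-counting idea concrete, here is the standard linear-programming formulation from the literature (our notation, not copied from the thesis). A variable Y_o estimates how often operator o occurs in a plan for state s, and any set of linear constraints that the operator counts of every real plan must satisfy yields an admissible heuristic:

```latex
h^{\mathrm{OC}}(s) \;=\; \min \sum_{o \in O} \mathit{cost}(o) \cdot Y_o
\quad \text{subject to} \quad
Y \text{ satisfies all operator-counting constraints for } s,
\qquad Y_o \ge 0 \text{ for all } o \in O.
```

Since the true operator counts of any plan form a feasible solution, the LP minimum can never exceed the cost of an optimal plan, which is exactly admissibility.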

Both operator-counting and potential heuristics are closely related to cost partitioning. They offer a new look on cost partitioned heuristics and already sparked research beyond their use as classical planning heuristics.

Master's theses

Classical planning tasks are typically formulated in PDDL. Some of them can be described more concisely using derived variables. Contrary to basic variables, their values cannot be changed by operators and are instead determined by axioms which specify conditions under which they take a certain value. Planning systems often support axioms in their search component, but their heuristics’ support is limited or nonexistent. This leads to decreased search performance with tasks that use axioms. We compile axioms away using our implementation of a known algorithm in the Fast Downward planner. Our results show that the compilation has a negative impact on search performance with its only benefit being the ability to use heuristics that have no axiom support. As a compromise between performance and expressivity, we identify axioms of a simple form and devise a compilation for them. We compile away all axioms in several of the tested domains without a decline in search performance.

The International Planning Competitions (IPCs) serve as a testing suite for planning systems. These domains are well-motivated as they are derived from, or possess characteristics analogous to, real-life applications. In this thesis, we study the computational complexity of the plan existence and bounded plan existence decision problems of the following grid-based IPC domains: VisitAll, TERMES, Tidybot, Floortile, and Nurikabe. In all of these domains, there are one or more agents moving through a rectangular grid (potentially with obstacles) performing actions along the way. In many cases, we engineer instances that can be solved only if the movement of the agent or agents follows a Hamiltonian path or cycle in a grid graph. This gives rise to many NP-hardness reductions from Hamiltonian path/cycle problems on grid graphs. In the case of VisitAll and Floortile, we give necessary and sufficient conditions for deciding the plan existence problem in polynomial time. We also show that Tidybot has the game Push-1F as a special case, and its plan existence problem is thus PSPACE-complete. The hardness proofs in this thesis highlight hard instances of these domains. Moreover, by assigning a complexity class to each domain, researchers and practitioners can better assess the strengths and limitations of new and existing algorithms in these domains.

Planning tasks can be used to describe many real world problems of interest. Solving those tasks optimally is thus an avenue of great interest. One established and successful approach for optimal planning is the merge-and-shrink framework, which decomposes the task into a factored transition system. The factors initially represent the behaviour of one state variable and are repeatedly combined and abstracted. The solution costs of the abstract states are then used as a heuristic to guide search in the original planning task. Existing merge-and-shrink transformations keep the factored transition system orthogonal, meaning that the variables of the planning task are represented in no more than one factor at any point. In this thesis we introduce the clone transformation, which duplicates a factor of the factored transition system, making it non-orthogonal. We introduce two classes of clone strategies, implement them in the Fast Downward planning system, and conclude that, while theoretically promising, our clone strategies are practically inefficient, as their performance was worse than state-of-the-art methods for merge-and-shrink.

This thesis presents a novel approach for improving the performance of classical planning algorithms by integrating cost partitioning with merge-and-shrink techniques. Cost partitioning is a well-known technique for admissibly adding multiple heuristic values. Merge-and-shrink, on the other hand, is a technique to generate well-informed abstractions. The "merge" part of the technique creates an abstract representation of the original problem by replacing two transition systems with their synchronised product. In contrast, the "shrink" part refers to reducing the size of a factor. By combining these two approaches, we aim to leverage the strengths of both methods to achieve better scalability and efficiency in solving classical planning problems. Considering a range of benchmark domains and the Fast Downward planning system, the experimental results show that the proposed method achieves the goal of fusing merge-and-shrink with cost partitioning towards better outcomes in classical planning.
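For readers unfamiliar with the "merge" step, here is a minimal sketch of the synchronised product of two labeled transition systems, assuming each factor is given as a set of states and a set of labeled transitions; the representation is an illustrative assumption, not the thesis's implementation.

```python
from itertools import product

def synchronised_product(ts1, ts2):
    """Synchronised product of two labeled transition systems.
    Each system is a dict {'states': set, 'transitions': set of
    (src, label, dst)}. A product transition exists whenever both
    factors have a transition with the same label."""
    states = set(product(ts1['states'], ts2['states']))
    transitions = {((s1, s2), l1, (t1, t2))
                   for (s1, l1, t1) in ts1['transitions']
                   for (s2, l2, t2) in ts2['transitions']
                   if l1 == l2}
    return {'states': states, 'transitions': transitions}
```

The "shrink" step would then abstract this product back down to a fixed size limit before the next merge.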

Planning is the process of finding a path in a planning task from the initial state to a goal state. Multiple algorithms have been implemented to solve such planning tasks, one of them being the Property-Directed Reachability algorithm. Property-Directed Reachability utilizes a series of propositional formulas called layers to represent a superset of the states with a goal distance of at most the layer index. The algorithm iteratively improves the layers such that they represent as few states as possible. This happens by strengthening the layer formulas and thereby excluding states with a goal distance higher than the layer index. The goal of this thesis is to implement a pre-processing step that seeds the layers with a formula that already excludes as many states as possible, in order to potentially improve runtime performance. We use the pattern database heuristic and its associated pattern generators to exploit the structure of the planning task in the seeding algorithm. We found that seeding does not consistently improve the performance of the Property-Directed Reachability algorithm: although we observed a significant reduction in planning time for some tasks, it increased significantly for others.

Certifying algorithms is a concept developed to increase trust in computed results by demanding an affirmation of the result in the form of a certificate. By inspecting the certificate, it is possible to determine the correctness of the produced output. Modern planning systems have long been certifying in the case of solvable instances, where a generated plan acts as a certificate.

Only recently have the first steps been taken towards also certifying unsolvability judgments, in the form of inductive certificates, which represent certain sets of states. Inductive certificates are expressed with the help of propositional formulas in a specific formalism.

In this thesis, we investigate the use of propositional formulas in conjunctive normal form (CNF) as a formalism for inductive certificates. First, we look into an approach that allows us to construct formulas representing inductive certificates in CNF. To show the general applicability of this approach, we extend it to the family of delete relaxation heuristics. Furthermore, we present how a planning system can generate an inductive validation formula, a single formula that can be used to validate whether the set found by the planner is indeed an inductive certificate. Finally, we show with an experimental evaluation that the CNF formalism can be feasible in practice for the generation and validation of inductive validation formulas.

In generalized planning, the aim is to solve whole classes of planning tasks instead of single tasks one at a time. Generalized representations provide information or knowledge about such classes to help solve them. This work compares the expressiveness of three generalized representations, generalized potential heuristics, policy sketches, and action schema networks, in terms of compilability. We use a notion of equivalence that requires two generalized representations to decompose the tasks of a class into the same subtasks. We present compilations between pairs of equivalent generalized representations and, where a compilation is impossible, proofs of this impossibility.

A Digital Microfluidic Biochip (DMFB) is a digitally controllable lab-on-a-chip. Droplets of fluids are moved, merged, and mixed on a grid. Routing these droplets efficiently has been tackled by various approaches. We apply temporal planning to droplet routing, inspired by its use in quantum circuit compilation. We test a model for droplet routing in both classical and temporal planning and compare the two versions. We show that our classical planning model is an efficient method to find droplet routes on DMFBs. We then extend our model to include spawning, disposing, merging, splitting, and mixing of droplets. The results of these extensions show that we are able to find plans for simple experiments. When scaling the problem size to real-life experiments, however, our model fails to find plans.

Cost partitioning is a technique used to calculate heuristics in classical optimal planning. It involves solving a linear program, which can be decomposed into a master problem and pricing problems. In this thesis, we combine Fourier-Motzkin elimination and the double description method in different ways to precompute the generating rays of the pricing problems. We empirically evaluate these approaches and propose a new method that replaces the Fourier-Motzkin elimination. Our new method improves the performance of our approaches with respect to runtime and peak memory usage.
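To make the first of these building blocks concrete, here is a small sketch of Fourier-Motzkin elimination for a system of linear inequalities a·x ≤ b; this illustrates the textbook algorithm under an assumed dict-based representation, not the thesis's implementation.

```python
from itertools import product

def fm_eliminate(rows, var):
    """Eliminate `var` from a system of inequalities.
    Each row is (coeffs, bound) with coeffs a dict var -> float,
    meaning sum(coeffs[v] * x[v]) <= bound."""
    pos = [r for r in rows if r[0].get(var, 0) > 0]
    neg = [r for r in rows if r[0].get(var, 0) < 0]
    new_rows = [r for r in rows if r[0].get(var, 0) == 0]
    # Every positive/negative pair yields one new inequality without `var`.
    for (cp, bp), (cn, bn) in product(pos, neg):
        lam, mu = -cn[var], cp[var]  # both multipliers are positive
        coeffs = {}
        for v in set(cp) | set(cn):
            c = lam * cp.get(v, 0) + mu * cn.get(v, 0)
            if v != var and abs(c) > 1e-12:
                coeffs[v] = c
        new_rows.append((coeffs, lam * bp + mu * bn))
    return new_rows
```

The well-known drawback visible here is the potentially quadratic blow-up per eliminated variable, which motivates combining the method with, or replacing it by, alternatives as done in the thesis.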

The increasing amount of data available nowadays has contributed to new scheduling approaches. Aviation is one of the domains most concerned, as aircraft engines alone account for millions of maintenance events handled by staff worldwide. In this thesis, we present a constraint-programming-based algorithm for the aircraft maintenance scheduling problem. We want to find the best time to do the maintenance by determining which employee will perform the work and when. We report how the scheduling process in aviation can be automated.
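As a rough illustration of what a constraint-programming model for such a scheduling problem can look like, here is a sketch using Google OR-Tools CP-SAT; the solver choice, the data, and all identifiers are assumptions for illustration, not taken from the thesis.

```python
from ortools.sat.python import cp_model

# Hypothetical data: (duration in hours, qualified employees) per event.
events = [(3, [0, 1]), (2, [1]), (4, [0, 2])]
horizon, num_employees = 24, 3

model = cp_model.CpModel()
ends = []
intervals_per_emp = {e: [] for e in range(num_employees)}
for i, (dur, qualified) in enumerate(events):
    start = model.NewIntVar(0, horizon - dur, f'start_{i}')
    end = model.NewIntVar(0, horizon, f'end_{i}')
    ends.append(end)
    literals = []
    for e in qualified:
        lit = model.NewBoolVar(f'assign_{i}_{e}')
        literals.append(lit)
        # Interval only takes effect if employee e performs event i.
        iv = model.NewOptionalIntervalVar(start, dur, end, lit, f'iv_{i}_{e}')
        intervals_per_emp[e].append(iv)
    model.Add(sum(literals) == 1)  # exactly one qualified employee per event
for e in range(num_employees):
    model.AddNoOverlap(intervals_per_emp[e])  # one job at a time per employee
makespan = model.NewIntVar(0, horizon, 'makespan')
model.AddMaxEquality(makespan, ends)
model.Minimize(makespan)

solver = cp_model.CpSolver()
status = solver.Solve(model)
```

A real aircraft maintenance model adds qualifications, shifts, and due dates, but they enter as further constraints of the same kind.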

Stochastic state-space tasks are mainly tackled with methods from the research field of artificial intelligence. PROST2014 is the state of the art for determining good actions in an MDP environment. In this thesis, we aimed to outperform the dominant planning system PROST2014 by providing a heuristic based on neural networks. For this purpose, we introduced two variants of neural networks that estimate the Q-value for a pair of state and action. Since we chose supervised learning as the learning method, generating the training data was one of the main tasks, in addition to designing the architecture and components of the neural networks. To determine the most suitable network parameters, we performed a sequential parameter search, from which we expected a local optimum of the model settings. In the end, the PROST2014 planning system could not be surpassed in the total rating evaluation. Nevertheless, in individual domains we could establish increased final scores on the side of the neural networks. The result shows the potential of this approach and points to possible adaptations in future work pursuing this procedure.

In classical planning, there are tasks that are hard and tasks that are easy. We can measure the complexity of a task with the correlation complexity, the improvability width, and the novelty width. In this work, we compare these measures.

We investigate what causes a correlation complexity of at least 2. To do so, we translate the state space into a vector space, which allows us to make use of linear algebra and convex cones.

Additionally, we introduce the Basel measure, a new measure that is based on potential heuristics and therefore similar to the correlation complexity but also comparable to the novelty width. We show that the Basel measure is a lower bound for the correlation complexity and that the novelty width +1 is an upper bound for the Basel measure.

Furthermore, we compute the Basel measure for some tasks of the International Planning Competitions and show that the translation of a task can increase the Basel measure by removing seemingly irrelevant state variables.
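For context, a one-dimensional potential heuristic, on which the Basel measure builds, assigns a weight to every fact and sums the weights of the facts in a state; a d-dimensional variant sums weights of conjunctions of at most d facts. In generic notation (not necessarily the thesis's own):

\[
h_{\mathrm{pot}}(s) = \sum_{f \in s} \mathrm{pot}(f),
\qquad
h^{d}_{\mathrm{pot}}(s) = \sum_{F \subseteq s,\ |F| \le d} \mathrm{pot}(F).
\]

Roughly, the correlation complexity is the smallest dimension d for which a potential heuristic with the desired descending behavior exists.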

Unsolvability is an important result in classical planning and has seen increased interest in recent years. This thesis explores unsolvability detection by automatically generating parity arguments, a well-known way of proving unsolvability. The argument requires an invariant measure whose parity remains constant across all reachable states, while all goal states are of the opposite parity. We express parity arguments using potential functions over the field F_2. We develop a set of constraints that describes potential functions with the necessary separating property, and show that the constraints can be represented efficiently for up to two-dimensional features. Enhanced with mutex information, this yields an algorithm that tests whether a parity function exists for a given planning task. The existence of such a function proves the task unsolvable. To determine its practical use, we empirically evaluate our approach on a benchmark of unsolvable problems and compare its performance to a state-of-the-art unsolvability planner. Lastly, we analyze the arguments found by our algorithm to confirm their validity and understand their expressive power.

We implemented the invariant synthesis algorithm proposed by Rintanen and experimentally compared it against Helmert’s mutex group synthesis algorithm as implemented in Fast Downward.

The context for the comparison is the translation of propositional STRIPS tasks to FDR tasks, which requires the identification of mutex groups.

Because of its dominating lead in translation speed, combined with only few and marginal advantages in search performance, Helmert's algorithm is clearly better for most uses. Meanwhile, Rintanen's algorithm is capable of finding invariants other than mutexes, which Helmert's algorithm by design cannot do.

The International Planning Competition (IPC) is a competition of state-of-the-art planning systems, which are evaluated on a set of different problems. It focuses on the challenges of AI planning by analyzing classical, probabilistic, and temporal planning and by presenting new problems for future research. Some of the probabilistic domains introduced in IPC 2018 are Academic Advising, Chromatic Dice, Cooperative Recon, Manufacturer, Push Your Luck, and Red-finned Blue-eyes.

This thesis aims to solve two probabilistic IPC 2018 domains, Academic Advising and Chromatic Dice, (near-)optimally. We use different techniques for the two domains. In Academic Advising, we use a relevance analysis to remove irrelevant actions and state variables from the planning task. We then convert the problem from probabilistic to classical planning, which lets us solve it efficiently. In Chromatic Dice, we implement backtracking search to solve the smaller instances optimally. More complex instances are partitioned into several smaller planning tasks, and a near-optimal policy is derived as a combination of the optimal solutions to the small instances.

The motivation for finding (near-)optimal policies is related to the IPC score, which measures the quality of planners. By providing optimal upper bounds for the domains, we contribute to the stabilization of the IPC score evaluation metric for these domains.

Most well-known and traditional online planners for probabilistic planning are in some way based on Monte-Carlo Tree Search. SOGBOFA, symbolic online gradient-based optimization for factored action MDPs, offers a new perspective: it constructs a function graph encoding the expected reward for a given input state, using independence assumptions for states and actions. On this function, it performs gradient ascent as a symbolic search optimizing the actions for the current state. This unique approach to probabilistic planning has shown very strong results and even more potential. In this thesis, we attempt to integrate the ideas of SOGBOFA into the traditionally successful Trial-based Heuristic Tree Search framework. Specifically, we design and evaluate two heuristics based on the aforementioned graph and its Q-value estimations, as well as the search using gradient ascent. We implement and evaluate these heuristics in the Prost planner, along with a version of the current standalone planner.

In this thesis, we consider cyclical dependencies between landmarks for cost-optimal planning. Landmarks denote properties that must hold at least once in all plans. However, if the orderings between them induce cyclical dependencies, one of the landmarks in each cycle must be achieved an additional time. We propose the generalized cycle-covering heuristic which considers this in addition to the cost for achieving all landmarks once.

Our research is motivated by recent applications of cycle-covering in the Freecell and logistics domains, where it yields near-optimal results. We carry it over to domain-independent planning using a linear programming approach. The relaxed version of a minimum hitting set problem for the landmarks is enhanced by constraints concerned with the cyclical dependencies between them. In theory, this approach surpasses a heuristic that only considers landmarks.

We apply the cycle-covering heuristic in practice, where its theoretical dominance is confirmed: many planning tasks contain cyclical dependencies, and considering them affects the heuristic estimates favorably. However, the number of tasks solved using the improved heuristic is virtually unaffected. We still believe that considering this feature of landmarks offers great potential for future work.
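In generic notation, the linear program sketched above looks roughly as follows, with a variable y_L for the number of times landmark L is achieved and cost(L) the cost of its cheapest achiever (the thesis's exact formulation may differ):

\[
\begin{aligned}
\text{minimize} \quad & \textstyle\sum_{L} cost(L)\, y_L \\
\text{subject to} \quad & y_L \ge 1 \quad \text{for every landmark } L, \\
& \textstyle\sum_{L \in C} y_L \ge |C| + 1 \quad \text{for every cycle } C \text{ of orderings.}
\end{aligned}
\]

Without the cycle constraints this is exactly the LP relaxation of the minimum hitting set view of landmarks; the cycle constraints encode that at least one landmark per cycle must be achieved twice.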

Potential heuristics are a class of heuristics used in classical planning to guide a search algorithm towards a goal state. Most of the existing research on potential heuristics focuses on finding heuristics that are admissible, such that they can be used by an algorithm like A* to arrive at an optimal solution. In this thesis, we focus on the computation of potential heuristics for satisficing planning, where plan optimality is not required and the objective is to find any solution. Specifically, our focus is on the computation of potential heuristics that are descending and dead-end avoiding (DDA), since these properties guarantee favorable search behavior when used with greedy search algorithms such as hill climbing. We formally prove that the computation of DDA heuristics is a PSPACE-complete problem and propose several approximation algorithms. Our evaluation shows that the resulting heuristics are competitive with established approaches such as pattern databases in terms of heuristic quality but suffer from several performance bottlenecks.

Most automated planners use heuristic search to solve planning tasks. Usually, the planners get as input a lifted representation of the task in PDDL, a compact formalism describing the task using a fragment of first-order logic. The planners then transform this task description into a grounded representation where the task is described in propositional logic. This grounded format can be exponentially larger than the lifted one, but many planners use it because it is easier to implement and reason about.

However, this transformation between lifted and grounded representations is sometimes not tractable. When this is the case, there is not much that planners based on heuristic search can do: since the transformation is a required preprocessing step, when it fails, the whole planner fails.

To solve this grounding problem, we introduce new methods to deal with tasks that cannot be grounded. Our work aims to find good ways to perform heuristic search while using a lifted representation of planning problems. We view planning as a database progression problem and borrow solutions from the areas of relational algebra and database theory.

Our theoretical and empirical results are encouraging: several instances that were never solved by any planner in the literature are now solved by our new lifted planner. For example, our planner can solve the challenging Organic Synthesis domain using a breadth-first search, while state-of-the-art planners cannot solve more than 60% of the instances. Furthermore, our results offer a new perspective and a deep theoretical study of lifted representations for planning tasks.

The generation of independently verifiable proofs for the unsolvability of planning tasks using different heuristics, including linear Merge-and-Shrink heuristics, is possible by means of a proof system framework. Proof generation in the case of non-linear Merge-and-Shrink heuristics, however, is currently not supported. This is due to the lack of a suitable state set representation formalism that can compactly represent the states mapped to a certain value in the corresponding Merge-and-Shrink representation (MSR). In this thesis, we overcome this shortcoming by using Sentential Decision Diagrams (SDDs) as set representations. We describe an algorithm that constructs the desired SDD from the MSR and show that efficient proof verification is possible with SDDs as the representation formalism. Additionally, we use a proof-of-concept implementation to analyze the overhead incurred by the proof generation functionality and the runtime of the proof verification.

The operator-counting framework is a framework in classical planning for heuristics based on linear programming. It covers several kinds of state-of-the-art linear programming heuristics, among them the post-hoc optimization heuristic. In this thesis, we use post-hoc optimization constraints and evaluate them under altered cost functions instead of the original cost function of the planning task. We show that such cost-altered post-hoc optimization constraints are also covered by the operator-counting framework and that they can achieve improved heuristic estimates compared with post-hoc optimization constraints under the original cost function. In our experiments, we were not able to achieve improved problem coverage, as we did not find a method for generating favorable cost functions that works well in all domains.
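For reference, post-hoc optimization constraints in the operator-counting framework have roughly the following shape, with operator-counting variables Y_o, rel(P) the operators relevant to pattern P, and h^P the pattern database estimate; the cost-altered variant studied here replaces cost by an alternative cost function both in the constraints and in the computation of h^P (generic notation, not necessarily the thesis's):

\[
h(s) = \min \sum_{o} cost(o)\, Y_o
\quad \text{s.t.} \quad
\sum_{o \in \mathrm{rel}(P)} cost(o)\, Y_o \ge h^P(s) \ \ \text{for each pattern } P,
\qquad Y_o \ge 0.
\]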

Heuristic forward search is the state-of-the-art approach to solving classical planning problems. Bidirectional heuristic search, on the other hand, has a lot of potential but has never been able to deliver on it in practice. Only recently was the near-optimal bidirectional search algorithm (NBS) introduced by Chen et al.; as the name suggests, NBS expands nearly the optimal number of states to solve any search problem. This is a novel achievement and makes NBS a very promising and efficient search algorithm. With this premise in mind, we raise the question of how applicable NBS is to planning. In this thesis, we investigate this question by implementing NBS in the state-of-the-art planner Fast Downward and analysing its performance on the benchmarks of the latest International Planning Competition. We additionally implement fractional meet-in-the-middle and computeWVC to analyse NBS' performance more thoroughly with regard to the structure of the problem task.

The conducted experiments show that NBS can successfully be applied to planning, as it was able to consistently outperform A*. Especially good results were achieved in the domains blocks, driverlog, floortile-opt11-strips, get-opt14-strips, logistics00, and termes-opt18-strips. Analysing these results, we deduce that the efficiency of forward and backward search depends heavily on the underlying implicit structure of the transition system induced by the problem task. This suggests that bidirectional search is inherently better suited for certain problems. Furthermore, we find that this aptitude for a certain search direction correlates with the domain, thereby providing a powerful analytic tool to derive the effectiveness of certain search approaches a priori.

In conclusion, even without intricate improvements, the NBS algorithm is able to compete with A* and therefore has further potential for future research. Additionally, the underlying transition system of a problem instance is shown to be an important factor influencing the efficiency of certain search approaches. This knowledge could be valuable for devising portfolio planners.

Multiple Sequence Alignment (MSA) is the problem of aligning multiple biological sequences in the evolutionarily most plausible way. It can be viewed as a shortest path problem through an n-dimensional lattice. Because of its large branching factor of 2^n − 1, it has attracted broad attention in the artificial intelligence community. Finding a globally optimal solution for more than a few sequences requires sophisticated heuristics and bounding techniques in order to solve the problem in acceptable time and within memory limitations. In this thesis, we show how existing heuristics fall into the category of combining certain pattern databases. We combine arbitrary pattern collections that can be used as heuristic estimates and apply cost partitioning techniques from classical planning to MSA. We implement two of these heuristics for MSA and compare their estimates to the existing heuristics.

Increasing Cost Tree Search is a promising approach to multi-agent pathfinding problems, but like all approaches it has to deal with a huge number of possible joint paths, growing exponentially with the number of agents. We explore the possibility of reducing this number by introducing a value abstraction to the Multi-valued Decision Diagrams used to represent sets of joint paths. To that end, we introduce a heat map to heuristically judge how collision-prone agent positions are, and present how to use and possibly refine abstract positions in order to still find valid paths.

Estimating cheapest plan costs with the help of network flows is an established technique. Plans and network flows are very similar; however, network flows can differ from plans in the presence of cycles. If a transition system contains cycles, flows might be composed of multiple disconnected parts. This discrepancy can make the cheapest plan estimation worse. One idea to get rid of the cycles is to introduce time steps: for every time step, the states of a transition system are copied, and transitions are changed so that they connect states only to states of the next time step, which ensures that there are no cycles. It turns out that, by applying this idea to multiple transition systems, the network flows of the individual transition systems can be synchronized via the time steps, yielding a new kind of heuristic that is also discussed in this thesis.

Probabilistic planning is a research field that became popular in the early 1990s. It aims at finding an optimal policy which maximizes the outcome of applying actions to states in an environment that features unpredictable events. Such environments can consist of a large number of states and actions, which makes finding an optimal policy intractable using classical methods. Using a heuristic function for a guided search allows for tackling such problems. Designing a domain-independent heuristic function requires complex algorithms which may be expensive in terms of time and memory consumption.

In this thesis, we apply supervised learning techniques to learn two domain-independent heuristic functions. We use three types of gradient descent methods: stochastic, batch, and mini-batch gradient descent, and their improved versions using momentum, learning rate decay, and early stopping. Furthermore, we apply the concept of feature combination in order to better learn the heuristic functions. The learned functions are provided to Prost, a domain-independent probabilistic planner, and benchmarked against the winning algorithms of the International Probabilistic Planning Competition held in 2014. The experiments show that learning an offline heuristic improves the overall score of the search for some of the domains used in the aforementioned competition.
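As a rough illustration of the optimization machinery involved, here is a minimal sketch of mini-batch gradient descent with momentum and early stopping for a linear model over state features; the feature representation, hyperparameters, and names are placeholders, not the thesis's actual setup.

```python
import numpy as np

def train_heuristic(X, y, lr=0.01, momentum=0.9, epochs=100,
                    batch=32, patience=10):
    """Fit weights w so that X @ w approximates heuristic targets y,
    using mini-batch gradient descent with momentum and early stopping."""
    rng = np.random.default_rng(0)
    w = np.zeros(X.shape[1])
    v = np.zeros_like(w)
    best_loss, best_w, stall = np.inf, w.copy(), 0
    for epoch in range(epochs):
        idx = rng.permutation(len(X))
        for i in range(0, len(X), batch):
            b = idx[i:i + batch]
            grad = 2 * X[b].T @ (X[b] @ w - y[b]) / len(b)  # MSE gradient
            v = momentum * v - lr * grad                    # momentum update
            w += v
        loss = np.mean((X @ w - y) ** 2)
        if loss < best_loss - 1e-8:
            best_loss, best_w, stall = loss, w.copy(), 0
        else:
            stall += 1
            if stall >= patience:  # early stopping
                break
    return best_w
```

The functions learned in the thesis are more expressive than this linear model, but the training loop follows the same pattern.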

The merge-and-shrink heuristic is a state-of-the-art admissible heuristic that is often used for optimal planning. Recent studies showed that the merge strategy is an important factor for the performance of the merge-and-shrink algorithm. There are many different merge strategies and improvements for merge strategies described in the literature. One of these merge strategies is MIASM by Fan et al., which tries to merge transition systems that produce unnecessary states in their product, which can then be pruned. Another is the symmetry-based merge-and-shrink framework by Sievers et al., which tries to merge transition systems that cause factored symmetries in their product. The latter strategy can be combined with other merge strategies and often improves their performance. However, the current combination of MIASM with factored symmetries performs worse than MIASM alone. We implement a different combination of MIASM that uses factored symmetries during the subset search of MIASM. Our experimental evaluation shows that our new combination solves more tasks than the existing MIASM and the previously implemented combination of MIASM with factored symmetries. We also evaluate different combinations of existing merge strategies and find previously unevaluated combinations that perform better than their basic versions.

Tree Cache is a pathfinding algorithm that selects one vertex as a root and constructs a tree with cheapest paths to all other vertices. A path is found by traversing up the tree from both the start and goal vertices to the root and concatenating the two parts. This is fast, but as all paths constructed this way pass through the root vertex, they can be highly suboptimal.

To improve this algorithm, we consider two simple approaches. The first is to construct multiple trees, and save the distance to each root in each vertex. To find a path, the algorithm first selects the root with the lowest total distance. The second approach is to remove redundant vertices, i.e. vertices that are between the root and the lowest common ancestor (LCA) of the start and goal vertices. The performance and space requirements of the resulting algorithm are then compared to the conceptually similar hub labels and differential heuristics.
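A compact sketch of the base algorithm and the LCA-based trimming described above, assuming the graph is given as an adjacency dict; identifiers are illustrative, not from the thesis.

```python
import heapq

def build_tree(graph, root):
    """Dijkstra from `root`: returns parent pointers and distances.
    `graph` maps each vertex to a list of (neighbour, cost) pairs."""
    dist, parent = {root: 0}, {root: None}
    pq = [(0, root)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, c in graph[u]:
            if v not in dist or d + c < dist[v]:
                dist[v], parent[v] = d + c, u
                heapq.heappush(pq, (d + c, v))
    return parent, dist

def tree_cache_path(parent, start, goal):
    """Concatenate the start->root and goal->root branches, cutting
    the redundant segment above their lowest common ancestor."""
    branch = []
    u = start
    while u is not None:
        branch.append(u)
        u = parent[u]
    index_on_branch = {v: i for i, v in enumerate(branch)}
    tail = []
    u = goal
    while u not in index_on_branch:  # climb until hitting the start branch
        tail.append(u)
        u = parent[u]
    return branch[:index_on_branch[u] + 1] + tail[::-1]
```

With several trees, one would run `build_tree` once per root and pick the root minimizing the summed start and goal distances before extracting the path.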

Greedy Best-First Search (GBFS) is a prominent search algorithm to find solutions for planning tasks. GBFS chooses nodes for further expansion based on a distance-to-goal estimator, the heuristic. This makes GBFS highly dependent on the quality of the heuristic. Heuristics often face the problem of producing Uninformed Heuristic Regions (UHRs), and GBFS additionally suffers from the possibility of simultaneously expanding nodes in multiple UHRs. In this thesis, we change the search behavior in UHRs: since the heuristic is unable to guide the search there, we instead try to expand novel states to escape the UHRs. The novelty measures how “new” a state is in the search. The result is a combination of heuristic- and novelty-guided search, which is indeed able to escape UHRs more quickly and solve more problems in reasonable time.

In classical AI planning, the state explosion problem is a recurring subject: although problem descriptions are compact, a huge number of states often needs to be considered. One way to tackle this problem is to use static pruning methods, which reduce the number of variables and operators in the problem description before planning.

In this work, we discuss the properties and limitations of three existing static pruning techniques with a focus on satisficing planning. We analyse these pruning techniques and their combinations, and identify synergy effects between them as well as the domains and problem structures in which these effects occur. We implement the three methods in an existing propositional planner and evaluate the performance of different configurations and combinations in a set of experiments on IPC benchmarks. We observe that static pruning techniques can increase the number of solved problems, and that the synergy effects of the combinations also occur on IPC benchmarks, although they do not lead to a major performance increase.

The goal of classical domain-independent planning is to find a sequence of actions which leads from a given initial state to a goal state that satisfies some goal criteria. Most planning systems use heuristic search algorithms to find such a sequence of actions, and a critical part of heuristic search is the heuristic function. In order to find a sequence of actions from an initial state to a goal state efficiently, this heuristic function has to guide the search towards the goal. Creating such an efficient heuristic function is difficult. Arfaee et al. show that it is possible to improve a given heuristic function by applying machine learning techniques on a single domain in the context of heuristic search. To achieve this improvement, they propose a bootstrap learning approach which successively improves the heuristic function.

In this thesis, we introduce a technique to learn heuristic functions for classical domain-independent planning based on the bootstrap learning approach introduced by Arfaee et al. In order to evaluate the performance of the learned heuristic functions, we have implemented a learning algorithm for the Fast Downward planning system. The experiments have shown that a learned heuristic function generally decreases the number of explored states compared to blind search. However, the total time to solve a single problem increases, because the heuristic function has to be learned before it can be applied.

Essential for estimating the performance of an algorithm in satisficing planning is its ability to solve benchmark problems. Such results cannot be compared directly when they originate from different implementations run on different machines. We implemented some of the most promising algorithms for greedy best-first search published in recent years and evaluated them on the same set of benchmarks. All algorithms are based on randomised search, localised search, or a combination of both. Our evaluation demonstrates the potential of these algorithms.

Heuristic search with admissible heuristics is the leading approach to cost-optimal, domain-independent planning. Pattern database heuristics, a type of abstraction heuristic, are state-of-the-art admissible heuristics. Two recent pattern database heuristics are the iPDB heuristic by Haslum et al. and the PhO heuristic by Pommerening et al.

The iPDB procedure performs a hill climbing search in the space of pattern collections and evaluates selected patterns using the canonical heuristic. We apply different techniques to the iPDB procedure, improving its hill climbing algorithm as well as the quality of the resulting heuristic. The second recent heuristic, the PhO heuristic, obtains strong heuristic values through linear programming. We present different techniques to influence and improve the PhO heuristic.

We evaluate the modified iPDB and PhO heuristics on the IPC benchmark suite and show that these abstraction heuristics can compete with other state-of-the-art heuristics in cost-optimal, domain-independent planning.

Greedy best-first search (GBFS) is a prominent search algorithm for satisficing planning: finding good enough solutions to a planning task in reasonable time. GBFS selects the next node to consider based on which node a heuristic function estimates to be most promising. This behaviour makes GBFS heavily dependent on the quality of the heuristic estimator. Inaccurate heuristics can lead GBFS into regions far away from a goal. Additionally, if the heuristic ranks several nodes the same, GBFS has no information on which node to follow. Diverse best-first search (DBFS) is an algorithm by Imai and Kishimoto [2011] with a local search component to emphasize exploitation. To enable exploration, DBFS uses probabilities to select the next node.

In two problem domains, we analyse GBFS' search behaviour and present theoretical results. We evaluate these results empirically and compare DBFS and GBFS on constructed as well as on provided problem instances.

State-of-the-art planning systems use a variety of control knowledge in order to enhance the performance of heuristic search. Unfortunately most forms of control knowledge use a specific formalism which makes them hard to combine. There have been several approaches which describe control knowledge in Linear Temporal Logic (LTL). We build upon this work and propose a general framework for encoding control knowledge in LTL formulas. The framework includes a criterion that any LTL formula used in it must fulfill in order to preserve optimal plans when used for pruning the search space; this way the validity of new LTL formulas describing control knowledge can be checked. The framework is implemented on top of the Fast Downward planning system and is tested with a pruning technique called Unnecessary Action Application, which detects if a previously applied action achieved no useful progress.

Landmarks are known to be usable for powerful heuristics for informed search. In this thesis, we explain and evaluate a novel algorithm that finds ordered landmarks of delete-free tasks by intersecting solutions in the relaxation. The proposed algorithm efficiently finds landmarks and natural orders of delete-free tasks, such as delete relaxations or Pi-m compilations.

Planning as heuristic search is the prevalent technique to solve planning problems from all kinds of domains. Heuristics estimate distances to goal states in order to guide a search through large state spaces. However, this guidance is sometimes only moderate, since many states lie on plateaus of equally prioritized states in the search space topology. Additional techniques that ignore or prefer some actions successfully support the search in such situations. Nevertheless, some action pruning techniques lead to incomplete searches.

We propose an under-approximation refinement framework that adds actions to under-approximations of planning tasks during search in order to find a plan. For this framework, we develop a refinement strategy. Starting from a search on an initial under-approximation of a planning task, the strategy adds actions determined at states close to a goal whenever the search does not progress towards a goal, until a plan is found. Key elements of this strategy are helpful actions and relaxed plans. We have implemented the under-approximation refinement framework in the greedy best-first search algorithm. Our results show considerable speedups for many classical planning problems. Moreover, we are able to plan with fewer actions than standard greedy best-first search.

The main approach to classical planning is heuristic search, and many cost heuristics are based on the delete relaxation. The optimal heuristic of a delete-free planning problem is called h+. This thesis explores two new ways to compute h+. Both approaches use factored planning, which decomposes the original planning problem in order to work on each subproblem separately; the algorithm reuses the subsolutions and combines them into a global solution.

The two algorithms are used to compute a cost heuristic for an A* search. As both approaches compute the optimal heuristic for delete-free planning tasks, they can also be used to find a solution for relaxed planning tasks.

Multi-Agent Path Finding (MAPF) is a common problem in robotics and memory management. Pebbles in Motion is an implementation of a polynomial-time problem solver for MAPF, based on work by Daniel Kornhauser from 1984. Recently, many research papers on MAPF have been published in the Artificial Intelligence community, but Kornhauser's work is hardly ever taken into account. We assumed that this might be related to the fact that his paper is rather mathematical and hardly describes its algorithms intuitively. This work aims to fill this gap by providing easily understandable implementation steps for programmers and a new detailed description for researchers in computer science.

Bachelor's theses

Fast Downward is a classical planner using heuristic search. The planner uses many advanced planning techniques that are not easy to teach, since they usually rely on complex data structures. To introduce planning techniques to the user, an interactive application is created. This application showcases planning techniques with an illustrative example: Blocksworld.

Blocksworld is an easily understandable planning problem which allows a simple representation of a state space. It is implemented in the Unreal Engine and provides an interface to the Fast Downward planner. Users can explore a state space themselves or have Fast Downward generate plans for them. The concept of heuristics as well as the state space are explained and made accessible to the user. The user experiences how the planner explores a state space and which techniques the planner uses.

This thesis is about implementing Jussi Rintanen's algorithm for schematic invariants. The algorithm is implemented in the planning tool Fast Downward and follows Rintanen's paper Schematic Invariants by Reduction to Ground Invariants. The thesis describes all definitions necessary to understand the algorithm and draws a comparison between the original task and a reduced task in terms of runtime and number of grounded actions.

Planning is a field of Artificial Intelligence. Planners are used to find a sequence of actions that leads from the initial state to a goal state. Many planning algorithms use heuristics, which allow the planner to focus on more promising paths. Pattern database heuristics construct such a heuristic by solving a simplified version of the problem and saving the associated costs in a pattern database. These pattern databases can be computed and stored using symbolic data structures.

In this thesis, we look at how pattern databases can be implemented with symbolic data structures, namely binary decision diagrams and algebraic decision diagrams. We extend Fast Downward (Helmert, 2006) with this implementation and compare its performance with the already implemented explicit pattern databases.

In the field of automated planning and scheduling, a planning task is essentially a state space which can be defined rigorously using one of several formalisms (e.g. STRIPS, SAS+, or PDDL). A planning algorithm tries to determine a sequence of actions that leads to a goal state for a given planning task. In recent years, attempts have been made to group certain planners together into so-called planner portfolios to leverage their effectiveness on different specific problem classes. In our project, we create an online planner which, in contrast to its offline counterparts, makes use of task-specific information when allocating a planner to a task. One idea that has recently gained interest is to apply machine learning methods to planner portfolios.

In previous work such as Delfi (Katz et al., 2018; Sievers et al., 2019a), supervised learning techniques were used, which made it necessary to train multiple networks to be able to attempt multiple, potentially different, planners for a given task. The reason is that, with a single network, the output would always be the same, as the input to the network would remain unchanged. In this project, we make use of techniques from reinforcement learning such as DQNs (Mnih et al., 2013). Using RL approaches such as DQNs allows us to extend the input to the network with information such as which planners were previously attempted and for how long. As a result, multiple attempts can be made after training only a single network.

Unfortunately, the results show that current reinforcement learning agents are, among other reasons, too sample-inefficient to deliver viable results given the size of the currently available data sets.

Planning tasks are important and difficult problems in computer science. A widely used approach is the use of delete relaxation heuristics, to which the additive and the FF heuristic belong. These two heuristics use a graph in their calculation which only has to be constructed once for a planning task and can then be used repeatedly. To solve such a problem efficiently, it is important that the calculation of the heuristics is fast. In this thesis, the idea for achieving a faster calculation is to combine redundant parts of the graph during construction, reducing the number of edges and thereby speeding up the calculation. The reduction of redundancies is done for each action of a planning task individually, but further ideas to simplify across all actions are also discussed.

Monte Carlo search methods are widely known, mostly for their success in game domains, although they are also applied to many non-game domains. In previous work by Schulte and Keller, it was established that best-first searches can adapt the action selection functionality which makes Monte Carlo methods so formidable. In practice, however, trial-based best-first search without exploration was shown to be slightly slower than its explicit open-list counterpart. In this thesis, we examine non-trial and trial-based searches and how they can address the exploration-exploitation dilemma. Lastly, we see how trial-based BFS can rectify a slower search by allowing occasional random action selection, comparing it to regular open-list searches in a series of experiments.

Sudoku has become one of the world's most popular logic puzzles, arousing interest in the general public and the scientific community. Although the rules of Sudoku may seem simple, they allow for nearly countless puzzle instances, some of which are very hard to solve. SAT solvers have proven to be a suitable option for solving Sudokus automatically, but they require the puzzles to be encoded as logical formulae in conjunctive normal form. In earlier work, such encodings have been successfully demonstrated for original Sudoku puzzles. In this thesis, we present encodings for rather unconventional Sudoku variants, developed by the puzzle community to create even more challenging solving experiences. Furthermore, we demonstrate how pseudo-Boolean constraints can be utilized to encode Sudoku variants whose rules involve sums. To implement an encoding of pseudo-Boolean constraints, we use binary decision diagrams and adder networks and study how they compare to each other.
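To give a flavour of such encodings, here is a minimal sketch of the standard per-cell clauses (at least one and at most one digit per cell) for a classical 9x9 Sudoku in DIMACS-style numbering; row, column, box, and variant-specific clauses are added analogously. The numbering scheme is a common convention, not necessarily the one used in the thesis.

```python
def var(r, c, d):
    # DIMACS variable for "cell (r, c) holds digit d"; r, c in 0..8, d in 1..9.
    return r * 81 + c * 9 + d

def cell_clauses():
    clauses = []
    for r in range(9):
        for c in range(9):
            # At least one digit per cell.
            clauses.append([var(r, c, d) for d in range(1, 10)])
            # At most one digit per cell (pairwise exclusion).
            for d1 in range(1, 10):
                for d2 in range(d1 + 1, 10):
                    clauses.append([-var(r, c, d1), -var(r, c, d2)])
    return clauses
```

Sum rules, as in the variants mentioned above, do not decompose into such short clauses, which is exactly where pseudo-Boolean encodings via binary decision diagrams or adder networks come in.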

In optimal classical planning, informed search algorithms like A* need admissible heuristics to find optimal solutions. Counterexample-guided abstraction refinement (CEGAR) is a method to iteratively generate abstractions that yield suitable abstraction heuristics. In this thesis, we propose a class of CEGAR algorithms for the generation of domain abstractions, a class of abstractions that rank between projections and Cartesian abstractions regarding the degree of refinement they allow. As no previously known algorithm constructs domain abstractions, we show that our algorithm is competitive with CEGAR algorithms that generate one projection or one Cartesian abstraction.

This thesis looks at Single-Player Chess as a planning domain using two approaches: one where we encode the Single-Player Chess problem for a domain-independent (general-purpose AI) planner, and one where we encode the problem in a domain-specific solver. We then compare the two approaches in a set of experiments. Both the domain-independent and the domain-specific implementation differ from traditional chess engines, because the task of the agent is not to find the best move for a given position and colour, but to check whether a given chess problem has a solution: if the agent can find a solution, the given chess puzzle is valid. The results of both approaches were measured in experiments, and we found that the domain-independent implementation is too slow, while the domain-specific implementation can solve the given puzzles reliably but has a memory bottleneck rooted in the search method used.

Carcassonne is a tile-based board game with a large state space and a high branching factor and therefore poses a challenge to artificial intelligence. In the past, Monte Carlo Tree Search (MCTS), a search algorithm for sequential decision-making processes, has been shown to find good solutions in large state spaces. MCTS works by iteratively building a game tree according to a tree policy. The profitability of paths within that tree is evaluated using a default policy, which influences in which directions the game tree is expanded. The functionality of these two policies, as well as other factors, can be implemented in many different ways, so many different variants of MCTS exist. In this thesis, we applied MCTS to the domain of two-player Carcassonne and evaluated different variants with regard to their performance and runtime. We found significant differences in performance across the variable aspects of MCTS and could thereby determine a configuration which performs best on the domain of Carcassonne. This variant consistently outperformed an average human player with a feasible runtime.
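A common instantiation of the tree policy, and a typical baseline among the variants compared in such studies, is UCT's UCB1 rule (shown here generically; the thesis evaluates several alternatives): at node n, select the child c maximizing

\[
\frac{W(c)}{N(c)} + C \sqrt{\frac{\ln N(n)}{N(c)}},
\]

where W(c) is the accumulated reward of c, N(·) counts visits, and the constant C trades off exploration against exploitation.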

In general, it is important to verify software, as it is prone to error. This also holds for solvers for classical planning tasks. So far, plans, as well as the fact that there is no plan for a given planning task, can be proven and independently verified; however, no such proof exists for the optimality of a solution. Our aim is to introduce two methods with which optimality can be proven and independently verified. We first reduce unit-cost tasks to unsolvable tasks, which enables us to make use of the already existing certificates for unsolvability. In a second approach, we propose a proof system for optimality, which enables us to infer that the determined cost of a task is optimal. This permits the direct generation of optimality certificates.

Pattern databases are one of the most powerful heuristics in classical planning. They contain the perfect costs for a simplified sub-problem. The post-hoc optimization heuristic is a technique for optimally combining a set of pattern databases. In this thesis, we adapt the post-hoc optimization heuristic to the sliding tile puzzle, which serves as a benchmark to compare the post-hoc optimization heuristic to already established methods for combining pattern databases. We then show how the post-hoc optimization heuristic improves over these established methods.

In this thesis, we generate landmarks for a logistics-specific task. Landmarks are actions that need to occur at least once in every plan. A landmark graph is a structure consisting of landmarks and edges between them called orderings. If a landmark graph contains cycles, then for every cycle one of its landmarks needs to be achieved at least twice. The generated logistics-specific landmarks and their orderings are used to compute the cyclic landmark heuristic. The task is to pick up on related work evaluating the cyclic landmark heuristic. We compare the landmark graphs generated by a domain-independent landmark generator to those of a domain-specific landmark generator for the logistics domain, the latter being the focus, and thereby aim to bridge the gap between domain-specific and domain-independent landmark generators. We also devise a unit to pre-process data for other domain-specific tasks. We show that the domain-specific approach is better suited than the domain-independent one.

Linear programming is a mathematical modelling technique in which a linear function is to be maximized or minimized subject to various constraints. This technique is especially useful when decisions have to be made for optimization problems. The goal of this thesis was to develop a tool for the game Factory Town with which optimization queries can be processed. It is possible to choose between various problem formulations and to answer them using LP and IP solvers. In addition, the mathematical formulations and the differences between the two methods are discussed. Finally, the generated results underline that the LP solutions are at least as good as, or even better than, the solutions of an IP.

Symbolic search is an important approach to classical planning. Symbolic search uses search algorithms that process sets of states at a time, which requires states to be represented by compact data structures called knowledge compilations. Merge-and-shrink representations come from a different field of planning, where they have been used to derive heuristic functions for state-space search. More generally, they represent functions that map variable assignments to a set of values; as such, we can regard them as a data structure we call Factored Mappings. In this thesis, we investigate Factored Mappings (FMs) as a knowledge compilation language with the hope of using them for symbolic search. We analyse the necessary transformations and queries for FMs by defining the needed operations and a canonical representation of FMs, and show that they run in polynomial time. We then show that it is possible to use Factored Mappings as a knowledge compilation for symbolic search by defining a symbolic search algorithm for finite-domain planning tasks that works with FMs.

Version control systems use a graph data structure to track revisions of files. Those graphs are mutated with various commands by the respective version control system. The goal of this thesis is to formally define a model of a subset of Git commands which mutate the revision graph, and to model those mutations as a planning task in the Planning Domain Definition Language. We explore multiple ways to model those graphs and compare the resulting models by testing them with a set of planners.

Pattern Databases are admissible abstraction heuristics for classical planning. In this thesis, we introduce a boosting process which consists of enlarging the pattern of a Pattern Database P, calculating a more informed Pattern Database P', and then min-compressing P' to the size of P, resulting in a compressed and still admissible Pattern Database P''. We design and implement two boosting algorithms, Hillclimbing and Randomwalk.

We combine pattern database heuristics using five different cost partitioning methods. The experiments compare cost partitionings computed over regular and boosted pattern databases. Performed on IPC (optimal track) tasks, the experiments show promising results: with our Randomwalk boosting variant, the coverage (number of solved tasks) of canonical cost partitioning increases by 9.

One-dimensional potential heuristics assign a numerical value, the potential, to each fact of a classical planning problem. The heuristic value of a state is the sum over the potentials belonging to the facts contained in the state. Fišer et al. (2020) recently proposed to strengthen potential heuristics utilizing mutexes and disambiguations. In this thesis, we embed the same enhancements in the planning system Fast Downward. The experimental evaluation shows that the strengthened potential heuristics are a refinement, but too computationally expensive to solve more problems than the non-strengthened potential heuristics.

The potentials are obtained with a linear program. Fišer et al. (2020) introduced an additional constraint on the initial state, and we propose additional constraints on random states. The additional constraints improve the number of solved problems by up to 5%.
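In simplified form, ignoring the mutex- and disambiguation-based strengthening and assuming fully specified operator preconditions and goal, the underlying linear program looks roughly as follows; the thesis and Fišer et al. (2020) work with a refined version of these constraints:

\[
\begin{aligned}
\text{maximize} \quad & \textstyle\sum_{f \in s_I} \mathrm{pot}(f) \\
\text{subject to} \quad & \textstyle\sum_{f \in G} \mathrm{pot}(f) \le 0, \\
& \textstyle\sum_{f \in \mathrm{pre}(o)} \mathrm{pot}(f) - \sum_{f \in \mathrm{eff}(o)} \mathrm{pot}(f) \le cost(o) \quad \text{for every operator } o.
\end{aligned}
\]

The additional constraints on the initial state and on random states mentioned above further guide this optimization towards heuristics that are informative on the states a search actually encounters.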

This thesis discusses the PINCH heuristic, a specific implementation of the additive heuristic that intends to combine the strengths of existing implementations of the additive heuristic. The goal of this thesis is to dig deep into the PINCH heuristic: to provide the most accessible resource for understanding PINCH and to analyze the performance of PINCH by comparing it to the algorithm on which it is based, Generalized Dijkstra.

Suboptimal search algorithms can offer attractive benefits compared to optimal search, namely increased coverage of larger search problems and quicker search times. Improving such algorithms, for example by reducing costs further towards optimal solutions or by reducing the number of node expansions, is therefore a compelling area for further research. This thesis explores the utility and scalability of the recently developed priority functions XDP, XUP, and PWXDP, and the Improved Optimistic Search algorithm, compared to Weighted A*, in the Fast Downward planner. Analyses focus on cost, total time, coverage, and node expansions, with experimental evidence suggesting preferable performance if strict optimality is not desired. The implementation of priority functions in eager best-first search showed marked improvements compared to A* search in coverage, total time, and number of expansions, without significant cost penalties. In line with previous research on suboptimal search, the experimental evidence even seems to indicate that these cost penalties do not reach the designated bound, even in larger search spaces.

In the automated planning field, algorithms and systems are developed for exploring state spaces and ultimately finding an action sequence leading from a task's initial state to its goal. Such planning systems may sometimes show unexpected behavior, caused by a planning task or by a bug in the planner itself. Generally speaking, finding the source of a bug tends to be easier when the cause can be isolated or simplified. In this thesis, we tackle this problem by making PDDL and SAS+ tasks smaller while ensuring that they still invoke a certain characteristic when executed with a planner. We implement a system that successively removes elements, such as objects, from a task and checks whether the transformed task still fails on the planner. Elements are removed in a syntactically consistent way; however, no semantic integrity is enforced. Our system's design is centered around the Fast Downward planning system, as we re-use some of its translator modules, and all test runs are performed with Fast Downward. At the core of our system, first-choice hill climbing is used for optimization. Our “minimizer” takes as arguments (1) a failing planner execution command, (2) a description of the failing characteristic, and (3) the type of element to be deleted. We evaluate our system's functionality on the basis of three use cases. In our most successful test runs, (1) a SAS+ task with initially 1536 operators and 184 variables is reduced to 2 operators and 2 variables, and (2) a PDDL task with initially 46 actions, 62 objects, and 29 predicate symbols is reduced to 2 actions, 6 objects, and 4 predicates.
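A minimal sketch of the first-choice hill climbing loop at the core of such a minimizer, with a hypothetical task interface (`remove`, `still_fails`, and `candidates` are illustrative assumptions, not the system's actual API):

```python
import random

def minimize(task, still_fails, candidates):
    """First-choice hill climbing: repeatedly try removing one element
    and keep the first removal that still reproduces the failure.
    `still_fails(task)` runs the planner and checks the characteristic;
    `candidates(task)` lists removable elements (objects, operators, ...)."""
    improved = True
    while improved:
        improved = False
        elems = candidates(task)
        random.shuffle(elems)
        for e in elems:
            smaller = task.remove(e)  # syntactically consistent removal
            if still_fails(smaller):
                task = smaller        # first improving neighbour is taken
                improved = True
                break
    return task
```

The loop terminates when no single-element removal reproduces the failing characteristic anymore, yielding a locally minimal task.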

Fast Downward is a classical planning system based on heuristic search. Its successor generator is an efficient and intelligent tool for generating the successor states of a given state. In this thesis, we implement different successor generators in the Fast Downward planning system and compare them against each other. Apart from the existing Fast Downward successor generator, we implement four others: a naive successor generator, one based on the marking of delete-relaxed heuristics, one based on the PSVN planning system, and one based on watched literals as used in modern SAT solvers. These successor generators are tested on a variety of planning benchmarks to see how well they compete against each other. We verified that there is a trade-off between precomputation and faster successor generation, and showed that each of the implemented successor generators has a use case, so it is advisable to choose a successor generator that fits the style of the planning task.
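For contrast with the more sophisticated variants, a naive successor generator as mentioned above can be sketched in a few lines, assuming a simple variable/value task representation (illustrative, not Fast Downward's actual data structures):

```python
from dataclasses import dataclass

@dataclass
class Operator:
    name: str
    preconditions: dict  # variable -> required value
    effects: dict        # variable -> new value

def successors(state, operators):
    """Naive successor generation: test every operator's preconditions
    against the state (a dict variable -> value) and apply the effects."""
    result = []
    for op in operators:
        if all(state.get(v) == val for v, val in op.preconditions.items()):
            succ = dict(state)
            succ.update(op.effects)
            result.append((op, succ))
    return result
```

The smarter generators avoid this linear scan over all operators at the price of precomputation, which is exactly the trade-off evaluated in the thesis.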

Verifying whether a planning algorithm came to the correct result for a given planning task is easy if a plan is emitted which solves the problem. But if a task is unsolvable most planners just state this fact without any explanation or even proof. In this thesis we present extended versions of the symbolic search algorithms SymPA and symbolic bidirectional uniform-cost search which, if a given planning task is unsolvable, provide certificates which prove unsolvability. We also discuss a concrete implementation of this version of SymPA.

Classical planning is an attractive approach to solving problems because of its generality and its relative ease of use. Domain-specific algorithms are appealing because of their performance, but require a lot of resources to be implemented. In this thesis we evaluate concept languages as a possible input language for expert domain knowledge into a planning system. We also explore mixed integer programming as a way to use this knowledge to improve search efficiency and to help the user find and refine useful domain knowledge.

Classical Planning is a branch of artificial intelligence that studies single-agent, static, deterministic, fully observable, discrete search problems. A common challenge in this field is the explosion of states to be considered when searching for the goal. One technique that has been developed to mitigate this is Strong Stubborn Set based pruning, where on each state expansion the considered successors are restricted to Strong Stubborn Sets, which exploit the properties of independent operators to cut down the tree or graph search. We adapt the definitions of the theory of Strong Stubborn Sets from the SAS+ setting to transition systems and validate a central theorem about the correctness of Strong Stubborn Set based pruning for transition systems in the interactive theorem prover Isabelle/HOL.

Planning problems are an important field in artificial intelligence research. The goal is to build an artificially intelligent machine that can handle and reliably solve as many different problems as possible by producing an optimal plan.

Trial-based Heuristic Tree Search (THTS) is a powerful tool for solving multi-armed-bandit-like problems, i.e. Markov decision processes with changing rewards. In current THTS, good rewards found during exploration can go unnoticed due to the large number of rewards; likewise, bad rewards encountered during exploration can degrade good nodes in the search tree. This thesis introduces a method, originating from the piecewise-stationary multi-armed bandit setting, to further optimize THTS.

Abstractions are a simple yet powerful method of creating a heuristic to solve classical planning problems optimally. In this thesis we make use of Cartesian abstractions generated with Counterexample-Guided Abstraction Refinement (CEGAR). This method refines abstractions incrementally by finding flaws and then resolving them until the abstraction is sufficiently evolved. The goal of this thesis is to implement and evaluate algorithms which select solutions of such flaws, in a way which results in the best abstraction (that is, the abstraction which causes the problem to then be solved most efficiently by the planner). We measure the performance of a refinement strategy by running the Fast Downward planner on a problem and measuring how long it takes to generate the abstraction, as well as how many expansions the planner requires to find a goal using the abstraction as a heuristic. We use a suite of various benchmark problems for evaluation, and we perform this experiment for a single abstraction and on abstractions for multiple subtasks. Finally, we attempt to predict which refinement strategy should be used based on parameters of the task, potentially allowing the planner to automatically select the best strategy at runtime.
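
The refinement loop that these strategies plug into can be summarized generically. In the Python sketch below, all five callables are hypothetical stand-ins for the planner's internals; the flaw-resolution strategies investigated in the thesis would live inside `refine`.

```python
def cegar(task, trivial_abstraction, find_abstract_plan, find_flaw,
          refine, out_of_resources):
    """Generic counterexample-guided abstraction refinement loop."""
    abstraction = trivial_abstraction(task)
    while not out_of_resources(abstraction):
        plan = find_abstract_plan(abstraction)
        if plan is None:        # abstract task unsolvable => task unsolvable
            break
        flaw = find_flaw(abstraction, plan, task)
        if flaw is None:        # the abstract plan works in the concrete task
            break
        abstraction = refine(abstraction, flaw)   # split a flawed abstract state
    return abstraction          # used as a heuristic for the search
```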

Heuristic search is a powerful paradigm in classical planning. The information generated by heuristic functions to guide the search towards a goal is a key component of many modern search algorithms. The paper “Using Backwards Generated Goals for Heuristic Planning” by Alcázar et al. proposes a way to make additional use of this information. They take the last actions of a relaxed plan as a basis to generate intermediate goals with a known path to the original goal. A plan is found when the forward search reaches an intermediate goal.

The premise of this thesis is to modify their approach by focusing on a single sequence of intermediate goals. The aim is to improve efficiency while preserving the benefits of backwards goal expansion. We propose different variations of our approach by introducing multiple ways to make decisions concerning the construction of intermediate goals. We evaluate these variations by comparing their performance and illustrate the challenges posed by this approach.

Counterexample-guided abstraction refinement (CEGAR) is a way to incrementally compute abstractions of transition systems. It starts with a coarse abstraction and then iteratively finds an abstract plan, checks where the plan fails in the concrete transition system and refines the abstraction such that the same failure cannot happen in subsequent iterations. As the abstraction grows in size, finding a solution for the abstract system becomes more and more costly. Because the abstraction grows incrementally, however, it is possible to maintain heuristic information about the abstract state space, allowing the use of informed search algorithms like A*. As the quality of the heuristic is crucial to the performance of informed search, the method for maintaining the heuristic has a significant impact on the performance of the abstraction refinement as a whole. In this thesis, we investigate different methods for maintaining the value of the perfect heuristic h* at all times and evaluate their performance.

Pattern Databases are a powerful class of abstraction heuristics which provide admissible path cost estimates by computing exact solution costs for all states of a smaller task. Said task is obtained by abstracting away variables of the original problem. Abstractions with few variables offer weak estimates, while introduction of additional variables is guaranteed to at least double the amount of memory needed for the pattern database. In this thesis, we present a class of algorithms based on counterexample-guided abstraction refinement (CEGAR), which exploit additivity relations of patterns to produce pattern collections from which we can derive heuristics that are both informative and computationally tractable. We show that our algorithms are competitive with already existing pattern generators by comparing their performance on a variety of planning tasks.

We consider the problem of Rubik’s Cube to evaluate modern abstraction heuristics. In order to find feasible abstractions of the enormous state space spanned by Rubik’s Cube, we apply projection in the form of pattern databases, Cartesian abstraction via counterexample-guided abstraction refinement, as well as merge-and-shrink strategies. While previous publications on Cartesian abstractions have not covered applicability to planning tasks with conditional effects, we introduce factorized effect tasks and show that Cartesian abstraction can be applied to them. In order to evaluate the performance of the chosen heuristics, we run experiments on different problem instances of Rubik’s Cube. We compare them by the initial h-value found for all problems and analyze the number of expanded states up to the last f-layer. These criteria provide insights into the informativeness of the considered heuristics. Cartesian abstraction yields perfect heuristic values for problem instances close to the goal; however, it is outperformed by pattern databases on more complex instances. Even though merge-and-shrink is the most general abstraction among those considered, it does not show better performance than the others.

Probabilistic planning expands on classical planning by tying probabilities to the effects of actions. Due to the exponential size of the state space, probabilistic planners have to come up with a strong policy in very limited time. One approach to optimising the policy that can be found in the available time is called metareasoning: a technique that aims to allocate more deliberation time to steps where additional planning time improves the policy, and less deliberation time to steps where an improvement with more planning time is unlikely.

This thesis aims to adapt a recent proposal of a formal metareasoning procedure by Lin et al. for the search algorithm BRTDP to work with the UCT algorithm in the Prost planner, and to compare its viability to the current standard and a number of less informed time management methods, in order to find a potential improvement over the current uniform distribution of deliberation time.

A planner tries to produce a policy that leads to a desired goal, given the available range of actions and an initial state. A traditional approach is to use abstraction. In this thesis we implement the algorithm described in the ASAP-UCT paper: Abstraction of State-Action Pairs in UCT by Ankit Anand, Aditya Grover, Mausam and Parag Singla.

The algorithm combines state and state-action abstraction with a UCT algorithm. We come to the conclusion that the algorithm needs to be improved, because the state-action abstraction often cannot detect similarities that a reasonable action abstraction could find.

The notion of adding a form of exploration to guide a search has proven to be an effective method of combating heuristic plateaus and improving the performance of greedy best-first search. The goal of this thesis is to take the same approach and introduce exploration in a bounded suboptimal search setting. Explicit estimation search (EES), established by Thayer and Ruml, consults potentially inadmissible information to determine the search order. Admissible heuristics are then used to guarantee the cost bound. In this work we replace the distance-to-go estimator used in EES with an approach based on the concept of novelty.
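
To illustrate the novelty concept, the sketch below computes its simplest (width-1) variant: a state is maximally novel if it contains at least one atom that no previously evaluated state contained. This generic formulation is meant as an illustration, not necessarily the exact estimator used in the thesis.

```python
def novelty(state, seen_atoms):
    """Width-1 novelty: 1 if `state` (a set of atoms) contains an atom
    never seen in any previously evaluated state, else 2 (worse)."""
    is_novel = not state <= seen_atoms   # state has at least one new atom
    seen_atoms |= state                  # remember atoms for later evaluations
    return 1 if is_novel else 2
```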

Classical domain-independent planning is about finding a sequence of actions which leads from an initial state to a goal state. A popular approach for solving planning problems efficiently is to utilize heuristic functions. A possible heuristic function is the perfect heuristic of the delete-relaxed planning problem, denoted h+. Delete relaxation simplifies the planning problem, making it easier to compute a perfect heuristic. However, computing h+ is still an NP-hard problem.

In this thesis we discuss a promising approach to computing h+ in practice. Inspired by the paper by Gnad, Hoffmann and Domshlak about star-shaped planning problems, we implemented the Flow-Cut algorithm. The basic idea behind flow-cut is to divide a problem that is unsolvable in practice into smaller subproblems that can be solved. We tested the flow-cut algorithm on the domains provided by the International Planning Competition benchmarks, resulting in the following conclusion: a divide-and-conquer approach can successfully be used to solve classical planning problems; however, it is not trivial to design such an algorithm to be more efficient than state-of-the-art search algorithms.

This thesis deals with the algorithm presented in the paper "Landmark-based Meta Best-First Search Algorithm: First Parallelization Attempt and Evaluation" by Simon Vernhes, Guillaume Infantes and Vincent Vidal. Their idea was to reconsider the approach to landmarks as a tool in automated planning, but in a markedly different way than previous work had done. Their result is a meta-search algorithm which explores landmark orderings to find a series of subproblems that reliably lead to an effective solution. Any complete planner may be used to solve the subproblems. While the referenced paper also deals with an attempt to effectively parallelize the Landmark-based Meta Best-First Search Algorithm, this thesis is concerned mainly with the sequential implementation and evaluation of the algorithm in the Fast Downward planning system.

Heuristics play an important role in classical planning. Using heuristics during state space search often reduces the time required to find a solution, but constructing heuristics and using them to calculate heuristic values takes time, reducing this benefit. Constructing heuristics and calculating heuristic values as quickly as possible is very important to the effectiveness of a heuristic. In this thesis we introduce methods to bound the construction of merge-and-shrink to reduce its construction time and increase its accuracy for small problems and to bound the heuristic calculation of landmark cut to reduce heuristic value calculation time. To evaluate the performance of these depth-bound heuristics we have implemented them in the Fast Downward planning system together with three iterative-deepening heuristic search algorithms: iterative-deepening A* search, a new breadth-first iterative-deepening version of A* search and iterative-deepening breadth-first heuristic search.

Greedy best-first search has proven to be a very efficient approach to satisficing planning but can potentially lose some of its effectiveness when the heuristic function misleads it into a local minimum or onto a plateau. This is where exploration with additional open lists comes in, to assist greedy best-first search with solving satisficing planning tasks more effectively. Building on the idea of exploration by clustering similar states together as described by Xie et al. [2014], where states are clustered according to heuristic values, we propose in this paper to instead cluster states based on the Hamming distance of the binary representation of states [Hamming, 1950]. The resulting open list maintains k buckets and inserts each given state into the bucket with the smallest average Hamming distance between the already clustered states and the new state. Additionally, our open list is capable of reclustering all states periodically with the use of the k-means algorithm. We were able to achieve promising results concerning the number of expansions necessary to reach a goal state, despite not achieving a higher coverage than fully random exploration due to slow performance. This was caused by the amount of computation required to identify the most fitting cluster when inserting a new state.
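
A minimal sketch of the bucket-insertion step described above, with states represented as binary tuples (an illustrative representation, not the thesis's actual data structures):

```python
def hamming(a, b):
    # Number of positions at which two binary state vectors differ.
    return sum(x != y for x, y in zip(a, b))

def insert(buckets, state):
    """Insert `state` into the bucket (a list of states) with the smallest
    average Hamming distance between its current members and the new state."""
    empty = [b for b in buckets if not b]
    if empty:                                  # populate empty buckets first
        empty[0].append(state)
        return
    def avg_dist(bucket):
        return sum(hamming(state, s) for s in bucket) / len(bucket)
    min(buckets, key=avg_dist).append(state)
```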

Monte Carlo Tree Search (MCTS) algorithms are an efficient method of solving probabilistic planning tasks that are modeled as Markov decision processes. MCTS uses two policies: a tree policy for iterating through the known part of the decision tree, and a default policy to simulate the actions and their rewards after leaving the tree. MCTS algorithms have been applied with great success to computer Go. To make the two policies fast, many enhancements based on online knowledge have been developed. The goal of All Moves As First (AMAF) enhancements is to improve the quality of the reward estimate in the tree policy. In the context of this thesis, the α-AMAF, Cutoff-AMAF and Rapid Action Value Estimation (RAVE) enhancements, which are very effective in computer Go, are implemented in the probabilistic planner PROST. To obtain a better default policy, Move Average Sampling is implemented in PROST and benchmarked against its current default policies.
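
The common core of the AMAF family is blending two statistics per action: the plain Monte Carlo estimate and the AMAF estimate gathered from all simulations in which the action occurred anywhere. A hedged Python sketch; the linear decay schedule in `rave` is just one simple variant, not necessarily the one used in PROST:

```python
def alpha_amaf(mc_value, amaf_value, alpha):
    # α-AMAF: fixed-weight blend of the two reward estimates.
    return alpha * amaf_value + (1 - alpha) * mc_value

def rave(mc_value, amaf_value, visits, k=1000):
    """RAVE: let α decay with the visit count, so the low-variance but
    biased AMAF estimate dominates early and the unbiased Monte Carlo
    estimate dominates late. k is an illustrative tuning constant."""
    alpha = max(0.0, (k - visits) / k)
    return alpha_amaf(mc_value, amaf_value, alpha)
```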

In classical planning the objective is to find a sequence of applicable actions that leads from the initial state to a goal state. In many cases the given problem can be of enormous size. To deal with these cases, a prominent method is heuristic search, which uses a heuristic function to evaluate states and can focus on the most promising ones. In addition to applying heuristics, the search algorithm can apply pruning techniques that exclude applicable actions in a state because applying them at a later point in the path would result in a path consisting of the same actions but in a different order. The question remains as to how these actions can be selected without generating too much additional work, so that the technique remains useful for the overall search. In this thesis we implement and evaluate the partition-based path pruning method proposed by Nissim et al. [1], which tries to decompose the set of all actions into partitions. Based on this decomposition, actions can be pruned with very little additional information. With some alterations to the A* search algorithm, the partition-based pruning method is guaranteed to preserve its optimality. The evaluation confirms that in several standard planning domains, the pruning method can reduce the size of the explored state space.

Validating real-time systems is an important and complex task which becomes exponentially harder with increasing system size. Therefore, finding an automated approach to check real-time systems for possible errors is crucial. The behaviour of such real-time systems can be modelled with timed automata. This thesis adapts and implements the under-approximation refinement algorithm for search-based planners proposed by Heusner et al. to find error states in timed automata via the directed model checking approach. The evaluation compares the algorithm to existing search methods and shows that a basic under-approximation refinement algorithm yields a competitive search method for directed model checking which is both fast and memory efficient. Additionally, we illustrate that with some minor alterations the proposed under-approximation refinement algorithm can be improved further.

This thesis attempts to learn a heuristic. For a heuristic to be learnable, it must have parameters that determine it. Potential heuristics offer such a possibility; their parameters are called potentials. Pattern databases can detect properties of a state space with comparatively little effort and can therefore be used as a basis for learning potentials. This thesis investigates two different approaches for learning potentials from the information contained in pattern databases. In experiments, the two approaches are examined in detail and finally compared with the FF heuristic.

We consider real-time strategy (RTS) games which have temporal and numerical aspects and pose challenges which have to be solved within limited search time. These games are interesting for AI research because they are more complex than board games. Current AI agents cannot consistently defeat average human players, while even the best players make mistakes we think an AI could avoid. In this thesis, we will focus on StarCraft Brood War. We will introduce a formal definition of the model Churchill and Buro proposed for StarCraft. This allows us to focus on Build Order optimization only. We have implemented a base version of the algorithm Churchill and Buro used for their agent. Using the implementation we are able to find solutions for Build Order Problems in StarCraft Brood War.

In the field of automated planning, symbolic search is one of the most promising techniques in use. Implementing symbolic search over finite state spaces requires a suitable data structure for logical formulas. This thesis explores the use of Sentential Decision Diagrams (SDDs) instead of the common Binary Decision Diagrams (BDDs) for this purpose. SDDs are a generalization of BDDs. We empirically test how an implementation of symbolic search with SDDs in the Fast Downward planner behaves with different vtrees. In particular, the performance of balanced vtrees, which often bring out the strengths of SDDs, is compared to right-linear vtrees, with which SDDs behave like BDDs.

The question of whether valid Sudokus (i.e., Sudokus with only one solution) with only 16 clues exist was answered in the negative in December 2011 by McGuire et al. using an exhaustive brute-force method. The difficulty of this task lies in the problem's vast search space and the resulting need for an efficient proof idea as well as faster algorithms. In this thesis, the proof method of McGuire et al. is confirmed and implemented in C++ for 2²×2² and 3²×3² Sudokus.

Finding a shortest path between two points is a fundamental problem in graph theory. In practice, it is often important to keep the resource consumption of computing such a path minimal, which can be achieved with a compressed path database. In this thesis we specify three methods for constructing a path database as space-efficiently as possible, and we evaluate the effectiveness of these methods on problem instances of varying size and complexity.

In planning, the objective is to get from an initial state into a goal state. A state can be described by a finite number of Boolean variables. To transition from one state to another we apply an action, which, at least in probabilistic planning, leads to a probability distribution over a set of possible successor states. From each transition the agent gains a reward dependent on the current state and its action. In this setting the number of possible states grows exponentially with the number of variables. We assume that the value of each variable is determined independently in a probabilistic fashion, so these variables influence the number of possible successor states in the same way as they do the state space. Consequently, it is almost impossible to obtain an optimal amount of reward by approaching this problem with a brute-force technique. One way past this problem is to abstract the problem and then solve the simplified version. That is, in general, the idea proposed by Boutilier and Dearden [1]. They introduced a method to create an abstraction which depends on the reward formula and the dependencies contained in the problem. With this idea as a basis, we create a heuristic for a trial-based heuristic tree search (THTS) algorithm [5] and a standalone planner using the framework PROST (Keller and Eyerich, 2012). These are then tested on all the domains of the International Probabilistic Planning Competition (IPPC).

A planning task is about transforming a given state into a state that satisfies required goal properties by sequentially applying actions. Efficiency is key when solving planning tasks. To save time and memory, many planners use heuristic search, in which a heuristic estimates which action should be applied next in order to reach a desired state as quickly as possible.

This thesis implements the P^m compilation for planning tasks proposed by Haslum and tests the h^max heuristic on the compiled problem against the h^m heuristic on the original problem. The implementation is an extension of the Fast Downward planning system. The test results show that the compilation increases the number of solved problems. Solving a compiled problem with the h^max heuristic is generally faster than solving the original problem with the h^m heuristic at the same level of informativeness. This speed gain is bought with a higher memory requirement.

The objective of classical planning is to find a sequence of actions which begins in a given initial state and ends in a state that satisfies a given goal condition. A popular approach to solve classical planning problems is based on heuristic forward search algorithms. In contrast, regression search algorithms apply actions “backwards” in order to find a plan from a goal state to the initial state. Currently, regression search algorithms are somewhat unpopular, as the generation of partial states in a basic regression search often leads to a significant growth of the explored search space. To tackle this problem, state subsumption is a pruning technique that additionally discards newly generated partial states for which a more general partial state has already been explored.

In this thesis, we discuss and evaluate techniques of regression and state subsumption. In order to evaluate their performance, we have implemented a regression search algorithm for the planning system Fast Downward, supporting both a simple subsumption technique as well as a refined subsumption technique using a trie data structure. The experiments have shown that a basic regression search algorithm generally increases the number of explored states compared to uniform-cost forward search. Regression with pruning based on state subsumption with a trie data structure significantly reduces the number of explored states compared to basic regression.
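
The subsumption test itself is simple; the trie only serves to find candidate subsumers quickly. A minimal Python sketch of the underlying check, with partial states represented as variable-to-value dicts (an illustrative representation, not Fast Downward's):

```python
def subsumes(general, specific):
    """A partial state `general` subsumes `specific` iff every variable
    assignment in `general` also occurs in `specific`."""
    return all(specific.get(var) == val for var, val in general.items())

def is_dominated(new_partial_state, explored):
    # Prune a newly generated partial state if a more general
    # (i.e. subsuming) partial state has already been explored.
    return any(subsumes(old, new_partial_state) for old in explored)
```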

This thesis discusses the Traveling Tournament Problem and how it can be solved with heuristic search. The Traveling Tournament Problem is a sports scheduling problem where one tries to find a schedule for a league that meets certain constraints while minimizing the overall distance traveled by the teams in this league. It is hard to solve for leagues with many teams, since its complexity grows exponentially in the number of teams. The largest instances solved to date are instances with leagues of up to 10 teams.

Previous related work has shown that solving the Traveling Tournament Problem with an IDA*-based tree search is a reasonable approach. In this thesis I implemented such a search and extended it with several enhancements to examine whether they improve the performance of the search. The heuristic that I used in my implementation is the Independent Lower Bound heuristic, which tries to find lower bounds on the traveling costs of each team in the considered league. With my implementation I was able to solve problem instances with up to 8 teams. The results of my evaluation were mostly consistent with the expected impact of the implemented enhancements on overall performance.
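
For reference, the IDA* skeleton underlying such a tree search is short: a depth-first search whose f = g + h bound is raised to the smallest exceeding value after each failed iteration. A generic Python sketch, where the Independent Lower Bound heuristic would be supplied as `h`:

```python
def ida_star(root, h, successors, is_goal):
    """IDA*: iteratively deepen the bound on f = g + h; returns the cost
    of an optimal solution, or None if no solution exists."""
    bound = h(root)
    while True:
        result = _dfs(root, 0, bound, h, successors, is_goal)
        if isinstance(result, tuple):   # ("found", cost)
            return result[1]
        if result == float("inf"):
            return None
        bound = result                  # smallest f-value above the old bound

def _dfs(node, g, bound, h, successors, is_goal):
    f = g + h(node)
    if f > bound:
        return f                        # report the exceeding f-value
    if is_goal(node):
        return ("found", g)
    minimum = float("inf")
    for cost, child in successors(node):
        result = _dfs(child, g + cost, bound, h, successors, is_goal)
        if isinstance(result, tuple):
            return result
        minimum = min(minimum, result)
    return minimum
```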

One major topic in artificial intelligence is classical planning: the process of finding a plan, i.e. a sequence of actions that leads from an initial state to a goal state for a specified problem. In problems with a huge number of states it is very difficult and time-consuming to find a plan. There are different pruning methods that attempt to lower the amount of time needed to find a plan by trying to reduce the number of states to explore. In this work we take a closer look at two of these pruning methods. Both of these methods rely on the last action that led to the current state. The first one is the so-called tunnel pruning, a generalisation of the tunnel macros that are used to solve Sokoban problems. The idea is to find actions that allow a tunnel and then prune all actions that are not in the tunnel of this action. The second method is partition-based path pruning. In this method all actions are distributed into different partitions. These partitions can then be used to prune actions that do not belong to the current partition.

The evaluation of these two pruning methods shows that they can reduce the number of explored states in some problem domains; however, the difference between pruned and normal search gets smaller when heuristic functions are used. It also shows that the two pruning rules affect different problem domains.

The goal of classical planning is to solve given planning problems as efficiently as possible. The solution, or plan, of a planning problem is a sequence of operators that leads from an initial state to a goal state. To find a goal state in a more directed way, some search algorithms use additional information about the state space: a heuristic. Starting from a state, it estimates the distance to the goal state. Ideally, every newly visited state would therefore have a smaller heuristic value than the previously visited one. However, there are search scenarios in which the heuristic does not help to get closer to a goal. This is particularly the case when the heuristic value of neighboring states does not change. For greedy best-first search this means that the search proceeds blindly across plateaus, because this algorithm relies exclusively on the heuristic. Algorithms that use a heuristic as a guide belong to the class of heuristic search algorithms.

This thesis is about maintaining orientation in the state space even in cases such as plateaus, by subjecting states to an additional prioritization besides the heuristic. The method presented here exploits dependencies between operators and extends greedy best-first search. How strongly operators depend on each other is captured by a distance measure that is computed before the actual search. The basic idea is to prefer states whose operators have previously benefited from one another. The heuristic only acts as a tie-breaker afterwards, so that we can first follow a promising path without the heuristic making us search in a different, less promising place.

The results show that, depending on the heuristic, our approach can outperform relying on the heuristic alone in pure search time. With very informative heuristics, however, the search can actually be hindered by our approach. Moreover, many problems are not solved because computing the distances is too time-consuming.

In classical planning, heuristic search is a popular approach to solving problems very efficiently. The objective of planning is to find a sequence of actions that can be applied to a given problem and that leads to a goal state. For this purpose, there are many heuristics. They are often a big help if a problem has a solution, but what happens if a problem does not have one? Which heuristics can help prove unsolvability without exploring the whole state space? How efficient are they? Admissible heuristics can be used for this purpose because they never overestimate the distance to a goal state and are therefore able to safely cut off parts of the search space. This makes it potentially easier to prove unsolvability.

In this project we developed a problem generator to automatically create unsolvable problem instances and used those generated instances to see how different admissible heuristics perform on them. We used the Japanese puzzle game Sokoban as the first problem because it has a high complexity but is still easy for humans to understand and visualize. As the second problem, we used a logistics problem called NoMystery because, unlike Sokoban, it is a resource-constrained problem and therefore a good supplement to our experiments. Furthermore, unsolvability occurs rather 'naturally' in these two domains and does not seem forced.

Sokoban is a computer game where each level consists of a two-dimensional grid of fields. There are walls as obstacles, moveable boxes and goal fields. The player controls the warehouse worker (Sokoban in Japanese) to push the boxes to the goal fields. The problem is very complex and that is why Sokoban has become a domain in planning.

Phase transitions mark a sudden change in solvability when traversing the problem space. They occur in the region of hard instances and have been found for many domains. In this thesis we investigate phase transitions in the Sokoban puzzle. For our investigation we generate and evaluate random instances. We identify the defining parameters for Sokoban and measure their influence on solvability. We show that phase transitions in the solvability of Sokoban can be found and measure their occurrence. We attempt to unify the parameters of Sokoban to obtain a prediction of the solvability and hardness of specific instances.

In planning, we address the problem of automatically finding a sequence of actions that leads from a given initial state to a state that satisfies some goal condition. In satisficing planning, our objective is to find plans with preferably low, but not necessarily the lowest possible costs while keeping in mind our limited resources like time or memory. A prominent approach for satisficing planning is based on heuristic search with inadmissible heuristics. However, depending on the applied heuristic, plans found with heuristic search might be of low quality, and hence, improving the quality of such plans is often desirable. In this thesis, we adapt and apply iterative tunneling search with A* (ITSA*) to planning. ITSA* is an algorithm for plan improvement which has been originally proposed by Furcy et al. for search problems. ITSA* intends to search the local space of a given solution path in order to find "short cuts" which allow us to improve our solution. In this thesis, we provide an implementation and systematic evaluation of this algorithm on the standard IPC benchmarks. Our results show that ITSA* also successfully works in the planning area.

In action planning, greedy best-first search (GBFS) is one of the standard techniques if suboptimal plans are accepted. GBFS uses a heuristic function to guide the search towards a goal state. To achieve generality, in domain-independent planning the heuristic function is generated automatically. A well-known problem of GBFS is search plateaus, i.e., regions in the search space where all states have equal heuristic values. In such regions, heuristic search can degenerate to uninformed search. Hence, techniques to escape from such plateaus are desired to improve the efficiency of the search. A recent approach to avoiding plateaus is based on diverse best-first search (DBFS) proposed by Imai and Kishimoto. However, this approach relies on several parameters. This thesis presents an implementation of DBFS in the Fast Downward planner. Furthermore, this thesis presents a systematic evaluation of DBFS for several parameter settings, leading to a better understanding of the impact of the parameter choices on search performance.

Risk is a popular board game where players conquer each other's countries. In this project, I created an AI that plays Risk and is capable of learning. For each decision it makes, it performs a simple search one step ahead, looking at the outcomes of all possible moves it could make, and picks the most beneficial. It judges the desirability of outcomes by a series of parameters, which are modified after each game using the TD(λ)-Algorithm, allowing the AI to learn.

The Canadian Traveler's Problem (CTP) is a path-finding problem where, due to unfavorable weather, some of the roads are impassable. At the beginning, the agent does not know which roads are traversable and which are not. Instead, it can observe the status of roads adjacent to its current location. We consider the stochastic variant of the problem, where the blocking status of a connection is randomly determined with known probabilities. The goal is to find a policy which minimizes the expected travel costs of the agent.

We discuss several properties of the stochastic CTP and present an efficient way to calculate state probabilities. With the aid of these theoretical results, we introduce an uninformed algorithm to find optimal policies.

Finding optimal solutions for general search problems is a challenging task. A powerful approach for solving such problems is based on heuristic search with pattern database heuristics. In this thesis, we present a domain-specific solver for the TopSpin Puzzle. This solver is based on the above-mentioned pattern database approach. We investigate several pattern databases and evaluate them on problem instances of different sizes.

Merge-and-shrink abstractions are a popular approach to generating abstraction heuristics for planning. The computation of merge-and-shrink abstractions relies on a merging and a shrinking strategy. A recently investigated shrinking strategy is based on using bisimulations, which are guaranteed to produce perfect heuristics. In this thesis, we investigate an efficient algorithm proposed by Dovier et al. for computing coarsest bisimulations. The algorithm, however, cannot be applied directly to planning and needs some adjustments. We show how this algorithm can be adapted to work with planning problems. In particular, we show how an edge-labelled state space can be translated to a state-labelled one and what other changes are necessary for the algorithm to be usable for planning problems. This includes a custom data structure that fulfils all requirements for meeting the worst-case complexity. Furthermore, the implementation is evaluated on planning problems from the International Planning Competitions. We will see that the resulting algorithm often cannot compete with the algorithm currently implemented in Fast Downward. We discuss the reasons why this is the case and propose possible solutions to resolve this issue.

In order to understand an algorithm, it is always helpful to have a visualization that shows step by step what the algorithm is doing. With this in mind, this Bachelor project explains and visualizes two AI techniques, constraint satisfaction processing and SAT backbones, using the game Gnomine as an example.

CSP techniques build up a network of constraints and infer information by propagating through one or several constraints at a time, reducing the domains of the variables in the constraint(s). SAT backbone computations find literals in a propositional formula that are true in every model of the given formula.

By showing how to apply these algorithms to the problem of solving a Gnomine game, I hope to give better insight into how the chosen algorithms work.
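
The backbone computation hinted at above reduces to repeated satisfiability checks: a literal belongs to the backbone iff the formula becomes unsatisfiable when the literal is forced to be false. A sketch using the python-sat package (an assumed dependency; any incremental SAT solver that supports assumptions works the same way):

```python
from pysat.solvers import Glucose3   # assumes the python-sat package

def backbone(clauses, variables):
    """Return the backbone literals of a satisfiable CNF formula:
    literal l is in the backbone iff clauses + [-l] is unsatisfiable."""
    solver = Glucose3(bootstrap_with=clauses)
    if not solver.solve():
        raise ValueError("formula is unsatisfiable")
    model = set(solver.get_model())
    result = []
    for v in variables:
        lit = v if v in model else -v          # the literal true in the found model
        if not solver.solve(assumptions=[-lit]):
            result.append(lit)                 # flipping lit is impossible
    solver.delete()
    return result
```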

Planning as heuristic search is a powerful approach to solve domain-independent planning problems. An important class of heuristics is based on abstractions of the original planning task. However, abstraction heuristics usually come with loss in precision. The contribution of this thesis is the investigation of constrained abstraction heuristics in general, and the application of this concept to pattern database and merge and shrink abstractions in particular. The idea is to use a subclass of mutexes which represent sets of variable-value-pairs so that only one of these pairs can be true at any given time, to regain some of the precision which is lost in the abstraction without increasing its size. By removing states and operators in the abstraction which conflict with such a mutex, the abstraction is refined and hence, the corresponding abstraction heuristic can get more informed. We have implemented the refinements of these heuristics in the Fast Downward planner and evaluated the different approaches using standard IPC benchmarks. The results show that the concept of constrained abstraction heuristics can improve planning as heuristic search in terms of time and coverage.

A permutation problem considers the task where an initial order of objects (i.e., an initial mapping of objects to locations) must be reordered into a given goal order by using permutation operators. Permutation operators are 1:1 mappings of the objects from their locations to (possibly other) locations. Examples of permutation problems are the well-known Rubik's Cube and the TopSpin Puzzle. Permutation problems have been a research area for a while, and several methods for solving such problems have been proposed over the last two centuries. Most of these methods focused on finding optimal solutions, causing an exponential runtime in the worst case.

In this work, we consider an algorithm for solving permutation problems that was originally proposed by M. Furst, J. Hopcroft and E. Luks in 1980. This algorithm was introduced on a theoretical level within a proof for "Testing Membership and Determining the Order of a Group", but had not been implemented and evaluated on practical problems so far. In contrast to the other above-mentioned solving algorithms, it only finds suboptimal solutions, but is guaranteed to run in polynomial time. The basic idea is to iteratively reach subgoals and then keep them fixed as we go on to reach the next goals. We have implemented this algorithm and evaluated it on different models, such as the Pancake Problem and the TopSpin Puzzle.

Pattern databases (Culberson & Schaeffer, 1998), or PDBs, have proven very effective in creating admissible heuristics for single-agent search algorithms such as A*. Haslum et al. proposed that a hill-climbing algorithm can be used to construct PDBs, using the canonical heuristic. A different approach is to change action costs in the pattern-related abstractions in order to obtain an admissible heuristic: the so-called cost partitioning.

The aim of this project was to implement cost partitioning inside Haslum's hill-climbing algorithm and to compare the results with the standard approach, which uses the canonical heuristic.

UCT ("upper confidence bounds applied to trees") is a state-of-the-art algorithm for acting under uncertainty, e.g. in probabilistic environments. In the last years it has been very successfully applied in numerous contexts, including two-player board games like Go and Mancala and stochastic single-agent optimization problems such as path planning under uncertainty and probabilistic action planning.

In this project the UCT algorithm was implemented, adapted and evaluated for the classical arcade game "Ms Pac-Man". The thesis introduces Ms Pac-Man and the UCT algorithm, discusses some critical design decisions for developing a strong UCT-based algorithm for playing Ms Pac-Man, and experimentally evaluates the implementation.
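
At the heart of UCT is the UCB1 selection rule applied at every node of the tree policy. A generic Python sketch; the node structure with `visits`, `total_reward`, and `children` is an assumed illustration, not the project's actual code:

```python
import math

def uct_select(node, c=1.4):
    """Pick the child maximizing mean reward plus an exploration bonus
    that shrinks as the child is visited more often (UCB1)."""
    def ucb1(child):
        if child.visits == 0:
            return float("inf")        # always try unvisited actions first
        exploit = child.total_reward / child.visits
        explore = c * math.sqrt(math.log(node.visits) / child.visits)
        return exploit + explore
    return max(node.children, key=ucb1)
```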



Artificial Intelligence

Thesis project.

In the final thesis project, the student carries out a research project under the supervision of one of the staff members of the research groups offering the AI programme. The project can be done based at Utrecht University, at a company or research institute, or at a foreign university (see also: ‘ stay abroad - traineeship ’).

Before starting the thesis project students are strongly advised to first attend the thesis information session meeting, which is offered at the start of each teaching period. See course INFOMTIMAI for more info .

When looking for a project, please check the following sources.

  • Konjoin always has a number of AI projects.
  • Jobteaser also has interesting external internships for AI students.

General description

The AI Thesis Project is split into a 14 EC project proposal phase (INFOMAI1) and a 30 EC thesis phase (INFOMAI2). The thesis project takes about 8 months (three periods). The set-up phase that is necessary to arrange your project is not counted. 

The thesis project consists of a project idea, a UU graduation supervisor, and a graduation project facilitator. The project facilitator can either be a company or the University. Original ideas from the students are welcome, as long as they are aligned with the research interests and/or proposed projects by the supervisors. 

For a thesis project, the student always needs a supervisor from one of the research groups of the UU offering the AI programme. If the final project is conducted within a company or external institute, both a local supervisor within the company/institute and a supervisor of the AI programme teaching staff monitor and guide the student. 

When can a thesis be started? When all courses are successfully completed, with the exception of Dilemmas of the Scientist (FI-MHPSDL1 and FI-MHPSDL2), for which you can do the second workshop (FI-MHPSDL2) during your thesis process. Further exceptions can be given by the AI programme coordinator for students with one pending course. Note that you should start looking for a supervisor and a subject before you have finished all your courses (see “Set Up” below). 

Where do I start? Read the information on the various stages of the thesis project below. If you have any questions not covered here contact the programme coordinator  ( [email protected] ). 

How long does the thesis take? Normally, a thesis project (phase 1 + phase 2) runs for 3 periods/terms (see the schedules). However, holidays, courses or other activities may lead to a thesis project that takes slightly longer. Please see below what to do when your thesis is delayed and you have to apply for a thesis deadline extension (part 1 and/or part 2). 

Previous theses. To get an overview of what an AI thesis looks like, you can consult previous theses online . 

Learning goals. After completing your thesis project, you will:

  • have advanced knowledge about a specific subject within AI
  • be able to position findings on a specific subject within the broader, interdisciplinary field of AI
  • be able to independently perform a critical literature study
  • be able to formulate a research question of interest to AI and a plan / method to answer this research question
  • be able to perform scientific research according to a predetermined plan and a standard method within AI
  • be able to report the research findings in the form of a scientific thesis
  • be able to report the research findings by means of an oral presentation

Set up

This preliminary step is executed before the official start of Phase 1. The duration largely depends on how quickly a supervisor is found and a topic is agreed upon. This part is excluded from the duration of the thesis project. 

1. Find a project and a supervisor   You can do an external or an internal (UU) project. The following tips might come in handy when looking for a project. 

  • Think about the courses you found interesting and ask the lecturers of these courses if they have/know of any projects.
  • Jobteaser also has interesting external internships for AI students. 

Note that any topic has to be agreed with the UU staff member who will act as a first supervisor. Arrange meetings with staff members to discuss possible options, based on their research interests (look at their webpages, their Google Scholar profile, or ask the Programme Coordinator ( [email protected] )). If unsure about possible topics, please arrange a meeting with the Programme Coordinator. Students can also try to arrange a project that fits within an internship with a company. Any project, however, requires a first supervisor from the department who guarantees the scientific quality of the thesis project, so it is advisable to talk to potential supervisors and/or the graduation coordinator before agreeing on an internship. 

2. Define your project  Together with the first supervisor, describe your project's title, problem, aims, and research goals. Come up with a short textual description (about 200 words). Also make clear arrangements with your first supervisor concerning planning, holidays, supervision meetings and so forth, and make sure you have a clear understanding regarding deadlines, holidays, and any extra work to be done during the thesis project. Normally, a thesis project runs for 3 periods/terms, but you can set any reasonable deadline in agreement with your supervisor. Please see below what to do when your thesis is delayed and you have to apply for a thesis deadline extension (part 1 and/or part 2).

3. Ensure adherence to Ethics and Privacy regulations (Quick Scan)  From Period 2 of 2022-23, all Master AI thesis projects require ethics and privacy approval. For projects that do not involve human users and data privacy issues this will be a very brief and straightforward process, but you still need to complete an ethics checklist. If you are doing your project with a supervisor in a department that already has an ethics approval process in place (such as Cognitive Psychology), then ask the supervisor what you need to do in order to obtain ethics approval. Otherwise, please inform your supervisor that you need to obtain ethics and privacy approval. Go to the website that contains the ethics checklist and sample information sheets and consent forms: https://www.uu.nl/en/research/institute-of-information-and-computing-sciences/ethics-and-privacy . First, download the Word form and discuss with your supervisor how to fill it in. Then fill in the Qualtrics form. Please fill in as the moderator email: [email protected] .

4. Work placement agreement  If you conduct a project outside UU, the GSNS Work Placement Agreement (WPA) should be filled in and signed by the student, the company supervisor, and the Science Research Project Coordinator. Deviations from the standard contract shall be discussed with the Science Research Project Coordinator. 

You need to fill out and upload your WPA with your Research Project application form (see next step) in OSIRIS student.

5. Formalize the start of your Research Project by submitting the Research Project application form 

Use Osiris student  (select 'MyCases', 'Start Case', ‘Research Project GSNS’) to submit your research project application form; if applicable, you will also upload the signed Work Placement Agreement with your application form in OSIRIS. 

Important: in order to apply completely and correctly, you must have discussed the project setup with your intended project supervisor beforehand! We advise you to study the request form prior to discussing it with your supervisor, or to fill it out together, to make sure you obtain all of the information required. 

After submitting your application form in OSIRIS, your form will be forwarded to your 1st and 2nd Examiner (supervisors), master’s programme coordinator, the Board of Examiners and Student Affairs for checks and approvals. You may be asked for modifications, should they find any problems with the form. 

Please note. You cannot register yourself in OSIRIS for the relevant research project courses (INFOMAI1 and INFOMAI2). You will be automatically registered for part 1 of the project upon approval of the Research Project Application Form.

Phase 1 - Project proposal

This phase comprises 14 EC (i.e. 10 weeks of full-time work) and is intended for you to do a preliminary study (usually in the form of a literature study), and to propose and plan your research. Importantly, this phase ends with a go/no-go decision towards Phase-2. You are expected to deliver a research proposal consisting of the following: 

  • A literature study section, summarizing works that are relevant to your research. 
  • Well-formulated research question(s). 
  • A plan for the second part of the thesis.

Additionally, depending on the nature of the project, your supervisor may require you to perform some initial research work in Phase-1, either in order to provide a convincing argument towards the prospect and feasibility of your Phase-2, or for efficiency to already do some work of Phase-2, e.g. developing an initial theory or building a first prototype of an algorithm. If such work is required, make an agreement with your supervisor on the scope of this work. 

At the end of Phase-1 the supervisor(s) will make a go/no-go decision. This decision, in terms of pass or not pass, will be entered in Osiris. Phase-1 assessment criteria: 

  • Scientific quality. This concerns the quality of the literature study, the relevance and impact of the research questions, and the merit of the proposed research method. 
  • Writing skills. This concerns the quality of your writing, use of English, textual structure, and coherence/consistency of your text. 
  • Planning. This concerns the clarity and feasibility of the proposed planning. 
  • The quality of additional work, if such is required.

An  example assessment form  with more detailed criteria is available. Please use this form only as a discussion piece and do not send in paper or scanned forms.

Phase 2 - Thesis

The second part comprises 30 EC (i.e. 21 weeks full-time). You will complete (at least) the following items: 

  • Perform and complete your research according to your plan (Phase 1). 
  • Write your thesis that presents your research and its results. 
  • Present and defend your results and conclusion. You are asked to prepare a presentation about your research that is understandable by fellow students. The defence will be 45 minutes long; 30 minutes for your presentation, and 15 minutes for questions. 

Content of the thesis.  In addition to the main text describing the research, the master thesis should at least contain: 

  • a front page, containing: name of the student, name of the supervisors, student number, date, name of the program (master Artificial Intelligence, Utrecht University); 
  • an abstract; 
  • an introduction and a conclusion;  
  • a brief discussion of the relevance of the thesis topic for the field of AI; 
  • a list of references.   

Please discuss the exact requirements for your thesis with your daily supervisor/first examiner at the beginning of your project.  

Phase-2 assessment criteria.  Your thesis is assessed using the following criteria: 

  • Project process (30%). This concerns your ability to work independently, to take initiative, to position your work in a broader context, to adapt to new requirements and developments, and to finish the thesis on time. 
  • Project report (30%). This concerns the ability to clearly formulate problems, to summarize the results, to compare them with related scientific work elsewhere, and to suggest future research lines. This also concerns clear, consistent, and unambiguous use of language in the thesis. The text should give the readers confidence that you understand the chronology, structure, and logical entities in your own text, and thus know what you are writing. 
  • Project results (30%). This concerns the level and importance of your results. Are the results publishable as a scientific paper? The difficulty of the problem that you solve also plays an important role, as does the amount/extent of the work you carry out. Important aspects include: the effectiveness of the chosen approach, completeness and preciseness of the literature study, arguments for the choices made, insight into the limitations of the chosen approach, proper interpretation of the results achieved, level of abstraction, and convincing argument, proofs or statistical analysis. 
  • Project presentation (10%). The ability to orally present your project and its results clearly and concisely. 

An  example assessment form  with more detailed criteria is available. Please use this form only as a discussion piece and do not send in paper or scanned forms. 

Phase 2 - Wrap up

When approaching the finalization of the thesis (i.e., when the supervisors think it is ready), it is time to wrap up the project and graduate. 

  • Set date for graduation presentation : both supervisors should agree on the date, including the time. 

  • Arrange (virtual) room for defence: The public defence can take place in Teams. If desired by the candidate and/or the supervisors, you can also defend your thesis in a lecture room on campus, ideally with a livestream or in a hybrid form so that e.g. fellow students or friends can also watch online. You can make a Teams meeting yourself, and send an e-mail to the secretariat ( [email protected] ) to arrange for a suitable room for your presentation. Please make sure to include the time, date, name of the thesis, supervisor, and the number of expected attendees. 

  • Inform the AI coordinator ( [email protected] ) about the details of your defence (title, abstract, date, time, room and/or Teams link). The coordinator will announce the defence on Teams and via the mailing list. 

  • Thesis defence: the student gives a presentation of 30 minutes, followed by a question-and-answer session that typically lasts about 15-20 minutes. Your first and second supervisor will decide on your grade and announce this after your presentation. 

  • Upload thesis to Osiris Student: After the defence, the student must upload the final version of their thesis through Osiris Student > my cases. 

  • Archiving and publishing thesis to Thesis Archive: You will be asked once more to upload the final version of your thesis through OSIRIS Student; this time it is for archiving and publishing purposes. The Case will not be available by default via OSIRIS Student. You will receive an email as soon as the Case in OSIRIS Student is available to you. More information on thesis archiving and publication can be found here. 

Graduation checks and ceremony

The Student Desk at Student Affairs keeps track of your study progress in Osiris. When Osiris indicates that you have completed all the required elements of your degree, your file is forwarded to the Board of Examiners. These checks only occur around the 15th of each month. Therefore, if you wish to graduate by the end of the month, please ensure you have completed all elements of your degree before the 15th of the month so that all your credits are registered in OSIRIS. This also includes the upload of your final thesis.

The Board of Examiners then checks whether you meet all examination requirements. Following the Board's approval, your graduation date will be emailed to you at your UU email account.

Please DO NOT terminate your enrolment in Studielink until the Student Desk has informed you about the decision of the Board of Examiners and you have received your graduation date. For further information, please check the graduation page.

What to do when your research project is delayed and a Research Project deadline extension is required?

Please note that the “protocol delay in graduation” applies when a project is delayed. This protocol can be found in Appendix 2 of the Education and Examination Regulations.

  • When you are delayed, e.g., due to personal circumstances or unforeseen circumstances within the project, it is important that you make an appointment with your study advisor in time (before the final deadline of part 1 and/or part 2).
  • It is further important that you discuss the delay with your supervisor and set new realistic goals and deadlines (where possible).  
  • Next, you need to apply through OSIRIS Student > ‘MyCases’ > 'Start Case' > ‘Request to the Board of Examiners GSNS’, and then choose the appropriate request type: “Delay of research or thesis project”. It is important that you upload a statement from the study advisor (hence the importance of speaking to your study advisor as soon as possible when a delay occurs) and a copy of an email in which the supervisors support the request for a deadline extension. You also need to include a proposed new deadline and a short statement supporting your request.


ETH AI Center  

Semester and thesis projects.

The ETH AI Center offers a wide range of semester and thesis projects for students at ETH Zurich, as well as other universities. Please see the list below for projects that are currently available.

Are you a student? Check out our Semester and Thesis projects below!

ETH Zurich uses SiROP to publish and search scientific projects. For more information visit sirop.org.

Lifelike Agility on ANYmal by Learning from Animals


The remarkable agility of animals, characterized by their rapid, fluid movements and precise interaction with their environment, serves as an inspiration for advancements in legged robotics. Recent progress in the field has underscored the potential of learning-based methods for robot control. These methods streamline the development process by optimizing control mechanisms directly from sensory inputs to actuator outputs, often employing deep reinforcement learning (RL) algorithms. By training in simulated environments, these algorithms can develop locomotion skills that are subsequently transferred to physical robots. Although this approach has led to significant achievements in robust locomotion, mimicking the wide range of agile capabilities observed in animals remains a significant challenge. Traditionally, manually crafted controllers have succeeded in replicating complex behaviors, but their development is labor-intensive and demands a high level of expertise in each specific skill. Reinforcement learning offers a promising alternative by potentially reducing the manual labor involved in controller development. However, crafting learning objectives that lead to the desired behaviors in robots also requires considerable expertise, specific to each skill.

learning from demonstrations, imitation learning, reinforcement learning

Master Thesis


Published since: 2024-03-25

Organization ETH Competence Center - ETH AI Center

Hosts Li Chenhao, Klemm Victor

Topics Information, Computing and Communication Sciences

Learning Real-time Human Motion Tracking on a Humanoid Robot

Humanoid robots, designed to mimic the structure and behavior of humans, have seen significant advancements in kinematics, dynamics, and control systems. Teleoperation of humanoid robots involves complex control strategies to manage bipedal locomotion, balance, and interaction with environments. Research in this area has focused on developing robots that can perform tasks in environments designed for humans, from simple object manipulation to navigating complex terrains. Reinforcement learning has emerged as a powerful method for enabling robots to learn from interactions with their environment, improving their performance over time without explicit programming for every possible scenario. In the context of humanoid robotics and teleoperation, RL can be used to optimize control policies, adapt to new tasks, and improve the efficiency and safety of human-robot interactions. Key challenges include the high dimensionality of the action space, the need for safe exploration, and the transfer of learned skills across different tasks and environments. Integrating human motion tracking with reinforcement learning on humanoid robots represents a cutting-edge area of research. This approach involves using human motion data as input to train RL models, enabling the robot to learn more natural and human-like movements. The goal is to develop systems that can not only replicate human actions in real-time but also adapt and improve their responses over time through learning. Challenges in this area include ensuring real-time performance, dealing with the variability of human motion, and maintaining stability and safety of the humanoid robot.

real-time, humanoid, reinforcement learning, representation learning

Hosts He Junzhe, Li Chenhao

Continuous Skill Learning with Fourier Latent Dynamics


In recent years, advancements in reinforcement learning have achieved remarkable success in teaching robots discrete motor skills. However, this process often involves intricate reward structuring and extensive hyperparameter adjustments for each new skill, making it a time-consuming and complex endeavor. This project proposes the development of a skill generator operating within a continuous latent space. This innovative approach contrasts with the discrete skill learning methods currently prevalent in the field. By leveraging a continuous latent space, the skill generator aims to produce a diverse range of skills without the need for individualized reward designs and hyperparameter configurations for each skill. This method not only simplifies the skill generation process but also promises to enhance the adaptability and efficiency of skill learning in robotics.

representation learning, periodic autoencoders, learning from demonstrations, policy modulating trajectory generators

Hosts Li Chenhao, Rudin Nikita

Topics Information, Computing and Communication Sciences, Engineering and Technology

Universal Humanoid Motion Representations for Expressive Learning-based Control

Recent advances in physically simulated humanoids have broadened their application spectrum, including animation, gaming, augmented and virtual reality (AR/VR), and robotics, showcasing significant enhancements in both performance and practicality. With the advent of motion capture (MoCap) technology and reinforcement learning (RL) techniques, these simulated humanoids are capable of replicating extensive human motion datasets, executing complex animations, and following intricate motion patterns using minimal sensor input. Nevertheless, generating such detailed and naturalistic motions requires meticulous motion data curation and the development of new physics-based policies from the ground up—a process that is not only labor-intensive but also fraught with challenges related to reward system design, dataset curation, and the learning algorithm, which can result in unnatural motions. To circumvent these challenges, researchers have explored the use of latent spaces or skill embeddings derived from pre-trained motion controllers, facilitating their application in hierarchical RL frameworks. This method involves training a low-level policy to generate a representation space from tasks like motion imitation or adversarial learning, which a high-level policy can then navigate to produce latent codes that represent specific motor actions. This approach promotes the reuse of learned motor skills and efficient action space sampling. However, the effectiveness of this strategy is often limited by the scope of the latent space, which is traditionally based on specialized and relatively narrow motion datasets, thus limiting the range of achievable behaviors. An alternative strategy involves employing a low-level controller as a motion imitator, using full-body kinematic motions as high-level control signals. This method is particularly prevalent in motion tracking applications, where supervised learning techniques are applied to paired input data, such as video and kinematic data. For generative tasks without paired data, RL becomes necessary, although kinematic motion presents challenges as a sampling space due to its high dimensionality and the absence of physical constraints. This necessitates the use of kinematic motion latent spaces for generative tasks and highlights the limitations of using purely kinematic signals for tasks requiring interaction with the environment or other agents, where understanding of interaction dynamics is crucial. We would like to extend the idea of creating a low-level controller as a motion imitator to full-body motions from real-time expressive kinematic targets.

representation learning, periodic autoencoders

Hosts Li Chenhao

Humanoid Locomotion Learning and Finetuning from Human Feedback


In the burgeoning field of deep reinforcement learning (RL), agents autonomously develop complex behaviors through a process of trial and error. Yet, the application of RL across various domains faces notable hurdles, particularly in devising appropriate reward functions. Traditional approaches often resort to sparse rewards for simplicity, though these prove inadequate for training efficient agents. Consequently, real-world applications may necessitate elaborate setups, such as employing accelerometers for door interaction detection, thermal imaging for action recognition, or motion capture systems for precise object tracking. Despite these advanced solutions, crafting an ideal reward function remains challenging due to the propensity of RL algorithms to exploit the reward system in unforeseen ways. Agents might fulfill objectives in unexpected manners, highlighting the complexity of encoding desired behaviors, like adherence to social norms, into a reward function. An alternative strategy, imitation learning, circumvents the intricacies of reward engineering by having the agent learn through the emulation of expert behavior. However, acquiring a sufficient number of high-quality demonstrations for this purpose is often impractically costly. Humans, in contrast, learn with remarkable autonomy, benefiting from intermittent guidance from educators who provide tailored feedback based on the learner's progress. This interactive learning model holds promise for artificial agents, offering a customized learning trajectory that mitigates reward exploitation without extensive reward function engineering. The challenge lies in ensuring the feedback process is both manageable for humans and rich enough to be effective. Despite its potential, the implementation of human-in-the-loop (HiL) RL remains limited in practice. Our research endeavors to significantly lessen the human labor involved in HiL learning, leveraging both unsupervised pre-training and preference-based learning to enhance agent development with minimal human intervention.

reinforcement learning from human feedback, preference learning

Deep Learning and Data Collection in Speech Recognition for Individuals with Complex Congenital Disorders


Complex congenital disorders often result in speech and motor skill impairments, posing communication challenges. Existing non-English speech recognition tools struggle with non-standard speech patterns, compounded by a lack of large training datasets. This project aims to create a personalized framework for training German speech recognition models, catering to the unique needs of individuals with congenital disorders. You will learn to collect data and apply machine learning or deep learning models.

Machine Learning, Deep Learning, Speech Recognition, Natural Language Processing

Published since: 2024-03-12

Hosts Vo Anh

Topics Information, Computing and Communication Sciences, Engineering and Technology, Behavioural and Cognitive Sciences

Towards AI Safety: Adversarial Attack & Defense on Neural Controllers


The project is a collaboration between SRI and the RSL/CRL lab and aims to investigate the weaknesses of neural controllers based on the state-of-the-art attacking method [3].

Adversarial attack; safe AI; Reinforcement learning

Semester Project, Master Thesis


Published since: 2024-03-06, Earliest start: 2024-03-06, Latest end: 2024-09-30

Applications limited to ETH Zurich, EPFL - Ecole Polytechnique Fédérale de Lausanne

Organization Robotic Systems Lab

Hosts Shi Fan

Learning Diverse Adversaries to Black-box Learning-based Controller for Quadruped Robots

The project aims to leverage the latest unsupervised skill discovery techniques to validate the state-of-the-art black-box learning-based controllers in diverse ways.

Diversity in RL, Trustworthy AI

Published since: 2024-03-02, Earliest start: 2024-03-02, Latest end: 2024-08-28

Applications limited to ETH Zurich

Towards interpretable learning pipeline: A visual-assisted workflow for locomotion learning


Current reinforcement learning (RL)-based locomotion controllers have shown promising performance. However, it is still unclear what is learned during the training process. In this project, we investigate suitable metrics and visualisation techniques to interactively steer locomotion learning tasks.

Reinforcement learning; visualization; interpretable AI

Published since: 2024-02-28, Earliest start: 2024-02-26, Latest end: 2024-08-26

Hosts Zhang Xiaoyu, Shi Fan, Wang April

Topics Engineering and Technology

Misestimation of CT-perfusion output in acute stroke due to attenuation curve truncation


In this master's thesis project, we are looking for a candidate to apply machine learning techniques to correct and predict signals from incomplete CT perfusion imaging for ischemic stroke. We hope to use machine learning techniques to de-noise and correct for the truncation in CT perfusion signals. In particular, we aim to infer the true attenuation curve after the truncation time-point.

machine learning; CT perfusion imaging; ischemic stroke; contrast-media attenuation time-curves

Published since: 2024-02-22, Earliest start: 2024-06-01

Organization Bjoern Menze

Hosts Davoudi Neda, Yang Kaiyuan

Topics Medical and Health Sciences, Information, Computing and Communication Sciences

Fast Deformable Mesh Tracking from Occluded Point Clouds

Despite the recent advances in point cloud fusion and mesh reconstruction, real-time shape reconstruction for complex and deformable objects is still an open problem in the field of robotics. While state-of-the-art works in computer graphics achieve good reconstruction accuracy with simulated point clouds, they are not time efficient for robotic applications and they have not been validated on real-world point clouds obtained via RGB-D cameras. This project will aim at making prior work from the lab robust to such noise/dropped points.

3D Vision, Robotic Perception, Mesh Reconstruction, Point Cloud, Data Augmentation

Master Thesis, ETH Zurich (ETHZ)

Published since: 2024-01-16

Hosts Zheng Hehui

Poke it: Towards deformable object manipulation using poking strategies.

Despite recent advances in object manipulation using real-world data [1], [2], deformable object manipulation remains a largely unexplored field, as most efforts have focused on rigid objects. Other approaches, leveraging simulation and learning [3], [4], have shown success in grasping tasks for deformable objects; however, their applicability to manipulation tasks has not been validated, and they rely on accurate estimation of the material properties to bridge the sim-to-real gap. The core of this project is deformable object manipulation and material estimation based on force feedback during interaction.

Robotic Manipulation, Deformable Object Manipulation, Material Learning, Reinforcement Learning, Imitation Learning

Parametrized Shape Optimization using Surrogate Fluid Models

Fast and efficient structure optimization based on parametrized shapes in a surrogate fluid simulation environment.

surrogate modeling, deep learning, fluid simulation, optimization

Semester Project, Bachelor Thesis, Master Thesis

Published since: 2024-01-15, Earliest start: 2023-09-01, Latest end: 2024-12-31

Hosts Katzschmann Robert, Prof. Dr., Michelis Mike

Topics Information, Computing and Communication Sciences, Engineering and Technology, Physics

Iterative Optimization for 3D Computational Soft Swimmer Design


Extending an iterative surrogate model optimization framework to include soft bodies with fluid-structure interaction. Design optimization will then be shown in 2D and 3D on passive soft swimmers.

Topics Mathematical Sciences, Information, Computing and Communication Sciences, Engineering and Technology

Continual Learning and Neural Networks’ Scaling Limit(s)


In this project, we aim to study the effect of the network's architecture in continual learning, with a specific focus on the effect of scaling to large width and depth, and its interplay with other architectural components such as residual connections.

continual learning, network scaling limits, kernels

Published since: 2023-12-05, Earliest start: 2023-12-05

Hosts Lanzillotta Giulia


12 Best Artificial Intelligence Topics for Research in 2024

Explore the "12 Best Artificial Intelligence Topics for Research in 2024." Dive into the top AI research areas, including Natural Language Processing, Computer Vision, Reinforcement Learning, Explainable AI (XAI), AI in Healthcare, Autonomous Vehicles, and AI Ethics and Bias. Stay ahead of the curve and make informed choices for your AI research endeavours.


Table of Contents

1) Top Artificial Intelligence Topics for Research
     a) Natural Language Processing
     b) Computer Vision
     c) Reinforcement Learning
     d) Explainable AI (XAI)
     e) Generative Adversarial Networks (GANs)
     f) Robotics and AI
     g) AI in healthcare
     h) AI for social good
     i) Autonomous vehicles
     j) AI ethics and bias
     k) Future of AI
     l) AI and education

2) Conclusion

Top Artificial Intelligence Topics for Research   

This section of the blog will expand on some of the best Artificial Intelligence Topics for research.


Natural Language Processing   

Natural Language Processing (NLP) is centred around empowering machines to comprehend, interpret, and even generate human language. Within this domain, three distinctive research avenues beckon: 

1) Sentiment analysis: This entails the study of methodologies to decipher and discern emotions encapsulated within textual content. Understanding sentiments is pivotal in applications ranging from brand perception analysis to social media insights (see the sketch after this list). 

2) Language generation: Generating coherent and contextually apt text is an ongoing pursuit. Investigating mechanisms that allow machines to produce human-like narratives and responses holds immense potential across sectors. 

3) Question answering systems: Constructing systems that can grasp the nuances of natural language questions and provide accurate, coherent responses is a cornerstone of NLP research. This facet has implications for knowledge dissemination, customer support, and more. 
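
To make the first of these concrete: modern libraries reduce a baseline sentiment classifier to a few lines. The sketch below is a minimal example assuming the Hugging Face transformers package; the pipeline's default pretrained model and the sample sentence are illustrative choices, not a prescribed setup.

```python
# Minimal sentiment-analysis sketch (assumes `pip install transformers`).
from transformers import pipeline

# Downloads the pipeline's default pretrained sentiment model on first use.
classifier = pipeline("sentiment-analysis")

print(classifier("The reviewers praised the method but doubted the conclusion."))
# e.g. [{'label': 'NEGATIVE', 'score': 0.97}]
```

Research in this area then asks what such off-the-shelf classifiers miss: sarcasm, domain shift, and mixed sentiment, among others.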

Computer Vision   

Computer Vision, a discipline that bestows machines with the ability to interpret visual data, is replete with intriguing avenues for research: 

1) Object detection and tracking: The development of algorithms capable of identifying and tracking objects within images and videos finds relevance in surveillance, automotive safety, and beyond (a minimal detection sketch follows this list). 

2) Image captioning: Bridging the gap between visual and textual comprehension, this research area focuses on generating descriptive captions for images, catering to visually impaired individuals and enhancing multimedia indexing. 

3) Facial recognition: Advancements in facial recognition technology hold implications for security, personalisation, and accessibility, necessitating ongoing research into accuracy and ethical considerations. 
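
As a baseline for the detection work described above, a pretrained model runs in a few lines. The sketch below assumes PyTorch and torchvision; the random tensor merely stands in for a real photograph.

```python
# Minimal object-detection sketch (assumes torch and torchvision).
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")   # COCO-pretrained weights
model.eval()

image = torch.rand(3, 480, 640)                      # stand-in for an RGB image in [0, 1]
with torch.no_grad():
    pred = model([image])[0]                         # boxes, labels, scores per detection

print(pred["boxes"].shape, pred["scores"][:3])
```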

Reinforcement Learning   

Reinforcement Learning revolves around training agents to make sequential decisions in order to maximise rewards. Within this realm, three prominent Artificial Intelligence Topics emerge: 

1) Autonomous agents: Crafting AI agents that exhibit decision-making prowess in dynamic environments paves the way for applications like autonomous robotics and adaptive systems. 

2) Deep Q-Networks (DQN): Deep Q-Networks, a class of reinforcement learning algorithms, remain under active research for refining value-based decision-making in complex scenarios (a simplified tabular sketch follows this list). 

3) Policy gradient methods: These methods, aiming to optimise policies directly, play a crucial role in fine-tuning decision-making processes across domains like gaming, finance, and robotics.  
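
To ground the value-based thread: Deep Q-Networks extend classical tabular Q-learning by swapping the table for a neural network. The sketch below is the tabular version on a toy five-state corridor, a deliberately simplified stand-in for the full DQN machinery.

```python
# Minimal tabular Q-learning sketch on a toy 5-state corridor; DQN replaces
# the table below with a neural network, but the update rule is the same idea.
import random

n_states, n_actions = 5, 2          # actions: 0 = step left, 1 = step right
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2

for episode in range(2000):
    s = 0
    while s != n_states - 1:                      # rightmost state is terminal
        if random.random() < epsilon:             # explore
            a = random.randrange(n_actions)
        else:                                     # exploit the current estimate
            a = max(range(n_actions), key=lambda x: Q[s][x])
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Core update: move Q(s, a) toward r + gamma * max_a' Q(s', a').
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

# Action 1 (right) should now have the higher value in every non-terminal state.
print([max(range(n_actions), key=lambda x: Q[s][x]) for s in range(n_states - 1)])
```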


Explainable AI (XAI)   

The pursuit of Explainable AI seeks to demystify the decision-making processes of AI systems. This area comprises Artificial Intelligence Topics such as: 

1) Model interpretability: Unravelling the inner workings of complex models to elucidate the factors influencing their outputs, thus fostering transparency and accountability. 

2) Visualising neural networks: Transforming abstract neural network structures into visual representations aids in comprehending their functionality and behaviour. 

3) Rule-based systems: Augmenting AI decision-making with interpretable, rule-based systems holds promise in domains requiring logical explanations for actions taken. 
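
To illustrate the rule-based end of this spectrum, the sketch below, assuming scikit-learn, trains a shallow decision tree and prints its decisions as nested rules; the iris dataset and the depth limit are arbitrary illustrative choices.

```python
# Minimal rule-extraction sketch (assumes scikit-learn is installed).
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)

# export_text renders the fitted tree as nested if/else rules on the features,
# giving a fully transparent (if less accurate) model.
print(export_text(tree, feature_names=list(data.feature_names)))
```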

Generative Adversarial Networks (GANs)   

The captivating world of Generative Adversarial Networks (GANs) unfolds through the interplay of generator and discriminator networks, birthing remarkable research avenues: 

1) Image generation: Crafting realistic images from random noise showcases the creative potential of GANs, with applications spanning art, design, and data augmentation (a minimal training loop is sketched after this list). 

2) Style transfer: Enabling the transfer of artistic styles between images, merging creativity and technology to yield visually captivating results. 

3) Anomaly detection: GANs find utility in identifying anomalies within datasets, bolstering fraud detection, quality control, and anomaly-sensitive industries. 
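
The generator/discriminator game itself is compact enough to sketch. The toy GAN below, assuming PyTorch, learns to produce single numbers from a Gaussian rather than images; real image GANs follow exactly this loop with convolutional networks.

```python
# Minimal GAN sketch (assumes PyTorch): the generator learns to turn noise
# into samples resembling a 1-D Gaussian; the discriminator learns to tell
# real samples from generated ones.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))   # noise -> sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))   # sample -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
ones, zeros = torch.ones(64, 1), torch.zeros(64, 1)

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0            # "real" data: N(3, 0.5^2)
    fake = G(torch.randn(64, 8))

    # Discriminator step: classify real as 1 and (detached) fake as 0.
    loss_d = bce(D(real), ones) + bce(D(fake.detach()), zeros)
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator step: try to make the discriminator output 1 on fresh fakes.
    loss_g = bce(D(G(torch.randn(64, 8))), ones)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

# The generated samples' mean should drift toward the real mean of 3.0.
print(G(torch.randn(1000, 8)).mean().item())
```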

Robotics and AI   

The synergy between Robotics and AI is a fertile ground for exploration, with Artificial Intelligence Topics such as: 

1) Human-robot collaboration: Research in this arena strives to establish harmonious collaboration between humans and robots, augmenting industry productivity and efficiency. 

2) Robot learning: By enabling robots to learn and adapt from their experiences, Researchers foster robots' autonomy and the ability to handle diverse tasks. 

3) Ethical considerations: Delving into the ethical implications surrounding AI-powered robots helps establish responsible guidelines for their deployment. 

AI in healthcare   

AI presents a transformative potential within healthcare, spurring research into: 

1) Medical diagnosis: AI aids in accurately diagnosing medical conditions, revolutionising early detection and patient care. 

2) Drug discovery: Leveraging AI for drug discovery expedites the identification of potential candidates, accelerating the development of new treatments. 

3) Personalised treatment: Tailoring medical interventions to individual patient profiles enhances treatment outcomes and patient well-being. 

AI for social good   

Harnessing the prowess of AI for Social Good entails addressing pressing global challenges: 

1) Environmental monitoring: AI-powered solutions facilitate real-time monitoring of ecological changes, supporting conservation and sustainable practices. 

2) Disaster response: Research in this area bolsters disaster response efforts by employing AI to analyse data and optimise resource allocation. 

3) Poverty alleviation: Researchers contribute to humanitarian efforts and socioeconomic equality by devising AI solutions to tackle poverty. 


Autonomous vehicles   

Autonomous Vehicles represent a realm brimming with potential and complexities, necessitating research in Artificial Intelligence Topics such as: 

1) Sensor fusion: Integrating data from diverse sensors enhances perception accuracy, which is essential for safe autonomous navigation. 

2) Path planning: Developing advanced algorithms for path planning ensures optimal routes while adhering to safety protocols. 

3) Safety and ethics: Ethical considerations, such as programming vehicles to make difficult decisions in potential accident scenarios, require meticulous research and deliberation. 

AI ethics and bias   

Ethical underpinnings in AI drive research efforts in these directions: 

1) Fairness in AI: Ensuring AI systems remain impartial and unbiased across diverse demographic groups. 

2) Bias detection and mitigation: Identifying and rectifying biases present within AI models guarantees equitable outcomes. 

3) Ethical decision-making: Developing frameworks that imbue AI with ethical decision-making capabilities aligns technology with societal values. 

Future of AI  

The vanguard of AI beckons Researchers to explore these horizons: 

1) Artificial General Intelligence (AGI): Speculating on the potential emergence of AI systems capable of emulating human-like intelligence opens dialogues on the implications and challenges. 

2) AI and creativity: Probing the interface between AI and creative domains, such as art and music, unveils the coalescence of human ingenuity and technological prowess. 

3) Ethical and regulatory challenges: Researching the ethical dilemmas and regulatory frameworks underpinning AI's evolution fortifies responsible innovation. 

AI and education   

The intersection of AI and Education opens doors to innovative learning paradigms: 

1) Personalised learning: Developing AI systems that adapt educational content to individual learning styles and paces. 

2) Intelligent tutoring systems: Creating AI-driven tutoring systems that provide targeted support to students. 

3) Educational data mining: Applying AI to analyse educational data for insights into learning patterns and trends. 


Conclusion  

The domain of AI is ever-expanding, rich with intriguing topics about Artificial Intelligence that beckon Researchers to explore, question, and innovate. Through the pursuit of these twelve diverse Artificial Intelligence Topics, we pave the way for not only technological advancement but also a deeper understanding of the societal impact of AI. By delving into these realms, Researchers stand poised to shape the trajectory of AI, ensuring it remains a force for progress, empowerment, and positive transformation in our world. 



Artificial Intelligence


FIU dissertations


Non-FIU dissertations

Many universities provide full-text access to their dissertations via a digital repository. If you know the title of a particular dissertation or thesis, try doing a Google search.

Aims to be the best possible resource for finding open access graduate theses and dissertations published around the world, with metadata from over 800 colleges, universities, and research institutions. It currently indexes over 1 million theses and dissertations.

This is a discovery service for open access research theses awarded by European universities.

A union catalog of Canadian theses and dissertations, in both electronic and analog formats, is available through the search interface on this portal.

There are currently more than 90 countries and over 1200 institutions represented. CRL has catalog records for over 800,000 foreign doctoral dissertations.

An international collaborative resource, the NDLTD Union Catalog contains more than one million records of electronic theses and dissertations. Use BASE, the VTLS Visualizer or any of the geographically specific search engines noted lower on their webpage.

Indexes doctoral dissertations and masters' theses in all areas of academic research, with international coverage.

ProQuest Dissertations & Theses Global


Thesis Topics

This list includes topics for potential bachelor's or master's theses, guided research, projects, seminars, and other activities. Search with Ctrl+F for desired keywords, e.g. ‘machine learning’ or others.

PLEASE NOTE: If you are interested in any of these topics, click the respective supervisor link to send a message with a simple CV, grade sheet, and topic ideas (if any). We will answer shortly.

Of course, your own ideas are always welcome!

Generating images for training Image Super-Resolution models

Type of work:

  • Guided Research
  • deep learning
  • single image super-resolution
  • synthetic datasets / dataset generation

Description:

Typically, Single Image Super-Resolution (SISR) models train on expressive real images (e.g., DIV2K and/or Flickr2K). This work aims to rethink the need for real images in training SISR models. In other words: do we need real images to learn useful upscaling mappings? To that end, the proposed work should investigate different methods for generating artificial datasets that might be suitable for SISR models; see [2]. The resulting models trained on the artificially generated training sets should then be evaluated on real test datasets (Set5, Set14, BSDS100, ...) and their outcomes analyzed (a minimal data-generation sketch follows the references below).

  • [1] Hitchhiker’s Guide to Super-Resolution: Introduction and Recent Advances
  • [2] Learning to See by Looking at Noise
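
For orientation, the sketch below, assuming PyTorch, shows the mechanical core of such an experiment: turning any synthetic image into a supervised low-/high-resolution training pair. The random tensor is a placeholder for a procedurally generated image.

```python
# Minimal sketch of building a synthetic LR/HR training pair for SISR
# (assumes PyTorch). A real project would replace the random "image" with a
# procedurally generated one, e.g. shaped noise as in [2].
import torch
import torch.nn.functional as F

hr = torch.rand(1, 3, 128, 128)                 # synthetic "high-resolution" image
lr = F.interpolate(hr, scale_factor=0.25, mode="bicubic", align_corners=False)

# (lr, hr) now forms one supervised training pair: the model sees `lr` and
# is trained to reconstruct `hr`, e.g. with an L1 loss.
print(lr.shape, hr.shape)   # torch.Size([1, 3, 32, 32]) torch.Size([1, 3, 128, 128])
```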

Machine Learning-based Surrogate Models for Accelerated Flow Simulations

  • Machine Learning
  • Microstructure Property Prediction
  • Surrogate Modeling

Surrogate modeling involves creating a simplified and computationally efficient machine learning model that approximates the behavior of a complex system, enabling faster predictions and analysis. For complex systems such as fluids, behavior is governed by partial differential equations (PDEs). By solving these PDEs, one can predict how a fluid behaves in a specific environment and under specific conditions. The computational time and resources needed to solve a PDE system depend on the size of the fluid domain and the complexity of the PDE. In practical applications where multiple environments and conditions are to be studied, it becomes very expensive to generate many solutions to such PDEs. Here, modern machine learning or deep learning-based surrogate models, which offer fast inference times in the online phase, are of interest.

In this work, the focus will be on developing surrogate models to replace the flow simulations in fiber-reinforced composite materials governed by the Navier-Stokes equations. Using a conventional PDE solver, a dataset of reference solutions was generated for supervised learning. In this thesis, your tasks will include the conceptualization and implementation of different ML architectures suited to this task, as well as the training and evaluation of the models on the available dataset. You will start with simple fully connected architectures and later extend them to 3D convolutional architectures. Also of interest is the infusion of the available domain knowledge into the ML models, known as physics-informed machine learning.

By applying ML to fluid applications, you will learn to acquire the right amount of domain specific knowledge and analyze your results together with domain experts from the field.
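
To make the starting point tangible, the sketch below, assuming PyTorch, trains the kind of simple fully connected surrogate the thesis would begin with; the random inputs and the analytic target merely stand in for the solver-generated dataset.

```python
# Minimal surrogate-model sketch (assumes PyTorch): a fully connected network
# maps a vector of geometry/condition parameters to a scalar flow quantity.
# The random tensors below stand in for a real PDE-solver dataset.
import torch
import torch.nn as nn

X = torch.rand(512, 8)                    # 8 input parameters per sample
y = X.sum(dim=1, keepdim=True) ** 0.5     # placeholder for solver outputs

model = nn.Sequential(nn.Linear(8, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(500):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

# Once trained, the surrogate replaces the expensive PDE solve at inference time.
print(loss.item())
```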

If you are interested, please send me an email with your Curriculum Vitae (CV), your Transcript of records and a short statement about your background in related topics.

References:

  • Santos, J.E., Xu, D., Jo, H., Landry, C.J., Prodanović, M., Pyrcz, M.J., 2020. PoreFlow-Net: A 3D convolutional neural network to predict fluid flow through porous media. Advances in Water Resources 138, 103539. https://doi.org/10.1016/j.advwatres.2020.103539
  • Kashefi, A., Mukerji, T., 2021. Point-cloud deep learning of porous media for permeability prediction. Physics of Fluids 33, 097109. https://doi.org/10.1063/5.0063904

Segmentation of Shoe Trace Images

  • benchmarking
  • image segmentation
  • keypoint extraction
  • self-attention

Help fight crime with AI! The DFKI and the Artificial Intelligence Transferlab of the State Criminal Police Office (Landeskriminalamt) are searching for master's candidates eager to apply their knowledge in AI to support crime scene analysis. The student will have the opportunity to visit the Transferlab in Mainz for an in-depth introduction to the topic, with full access to DFKI's computing cluster infrastructure.

General goal: improve identification of specific markers normally present in shoe trace images acquired in crime scenes.

Specific goals:

  • [benchmarking] evaluate existing image segmentation models in the context of shoe trace analysis;
  • [research] propose a segmentation model combining semantics and keypoint information, tailored to the specific markers present in crime scene photographs;
  • [research] assess model performance on labeled data;
  • [research] define limits and requirements for the existing training and test data.
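
As a concrete starting point for the benchmarking goal, the sketch below, assuming torchvision, runs one pretrained segmentation model; the random tensor stands in for a crime-scene photograph, and the model is only one of several worth comparing.

```python
# Minimal benchmarking sketch (assumes torchvision): run a pretrained semantic
# segmentation model as a baseline before comparing models on shoe-trace data.
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights="DEFAULT")
model.eval()

image = torch.rand(1, 3, 520, 520)        # placeholder for a normalized RGB photo
with torch.no_grad():
    out = model(image)["out"]             # per-pixel class logits

pred = out.argmax(dim=1)                  # segmentation mask, one class id per pixel
print(pred.shape)                         # torch.Size([1, 520, 520])
```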

Retrieval of Shoe Sole Images

  • graph neural networks
  • image retrieval

General goal: improve retrieval of shoe sole images acquired in laboratory, i.e. under controlled conditions and used as reference by forensics specialists.

  • [benchmarking] evaluate existing image retrieval approaches in the context of shoe trace recognition;
  • [research] propose a graph network architecture based on keypoint information extracted from the images;
  • [research] evaluate performance of the proposed model against existing methods.

Sherlock Holmes goes AI - Generative comics art of detective scenes and identikits

  • Bias in image generation models
  • Deep Learning Frameworks
  • Frontend visualization
  • Speech-To-Text, Text-to-Image Models
  • Transformers, Diffusion Models, Hugging Face

Sherlock Holmes is taking the statement of the witness. The witness is describing the appearance of the perpetrator and the forensic setting they still remember. Your task as the AI investigator will be to generate a comic sketch of the scene and phantom images of the accused person based on the spoken statement of the witness. For this you will use state-of-the-art transformers and visualize the output in an application. As the AI investigator, you will detect, qualify, and quantify bias in the images produced by the different generation models you have chosen.

This work is embedded in the DFKI KI4Pol lab together with the law enforcement agencies. The stories are fictional; you will not work on true crime.
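
For a rough sense of the moving parts, the sketch below chains an off-the-shelf speech-recognition pipeline into a text-to-image diffusion model using the Hugging Face transformers and diffusers libraries; the checkpoint name, the audio file name, and the prompt template are illustrative placeholders, not the lab's prescribed stack.

```python
# Minimal speech-to-sketch pipeline (assumes `transformers` and `diffusers`);
# the model checkpoint and audio file path are illustrative placeholders.
from transformers import pipeline
from diffusers import StableDiffusionPipeline

asr = pipeline("automatic-speech-recognition")      # default ASR checkpoint
statement = asr("witness_statement.wav")["text"]    # placeholder audio -> transcript

sd = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
image = sd(f"comic sketch of a detective scene: {statement}").images[0]
image.save("scene.png")
```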

Requirements:

  • German level B1/2 or equivalent
  • Outstanding academic achievements
  • Motivational cover letter

Generative Adversarial Networks for Agricultural Yield Prediction

  • Deep Learning
  • Generative Adversarial Networks
  • Yield Prediction

Agricultural yield prediction has been an essential research area for many years, as it helps farmers and policymakers to make informed decisions about crop management, resource allocation, and food security. Computer vision and machine learning techniques have shown promising results in predicting crop yield, but there is still room for improvement in the accuracy and precision of these predictions. Generative Adversarial Networks (GANs) are a type of neural network that has shown success in generating realistic images, which can be leveraged for the prediction of agricultural yields.

  • Goodfellow, Ian, et al. "Generative adversarial networks." Communications of the ACM 63.11 (2020): 139-144.
  • Xu, Z., Du, J., Wang, J., Jiang, C., and Ren, Y. "Satellite Image Prediction Relying on GAN and LSTM Neural Networks." ICC 2019 - 2019 IEEE International Conference on Communications (ICC), Shanghai, China, 2019, pp. 1-6. doi: 10.1109/ICC.2019.8761462.
  • Drees, Lukas, et al. "Temporal prediction and evaluation of brassica growth in the field using conditional generative adversarial networks." Computers and Electronics in Agriculture 190 (2021): 106415.

Knowledge Graphs for Real Estate Management

  • corporate memory
  • knowledge graph

Real estate management is complex and involves a wide variety of information sources and objects for carrying out its processes. A corporate memory can support the analysis and mapping of this information space in order to enable knowledge services. The task is to design an ontology for real estate management and to develop an example scenario. Good German language skills are required for the materials and the application partners.
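
For a flavor of what the task involves, the sketch below, assuming the rdflib library, builds a handful of triples for an imaginary real-estate ontology; every class and property name is illustrative rather than part of any existing ontology.

```python
# Minimal knowledge-graph sketch (assumes `pip install rdflib`); all entity
# and property names below are illustrative, not part of a DFKI ontology.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/realestate/")
g = Graph()

g.add((EX.Building42, RDF.type, EX.Building))
g.add((EX.Building42, EX.hasAddress, Literal("Trippstadter Str. 122, Kaiserslautern")))
g.add((EX.Lease7, RDF.type, EX.LeaseContract))
g.add((EX.Lease7, EX.refersTo, EX.Building42))

# Serialize as Turtle; a real corporate memory would persist this in a
# triple store and expose it to knowledge services via SPARQL.
print(g.serialize(format="turtle"))
```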

Fault and Efficiency Prediction in High Performance Computing

  • Master Thesis
  • event data modelling
  • survival modelling
  • time series

High resource usage is thought to be an indirect cause of failures in large cluster systems, but little work has systematically investigated the role of high resource usage in system failures, largely due to the lack of a comprehensive resource monitoring tool that resolves resource use by job and node. This project studies log data of the DFKI Kaiserslautern high performance cluster to assess the predictability of adverse events (node failure, GPU freeze) and energy usage, and to identify the most relevant data within. The second supervisor for this work is Joachim Folz.

Data is available via a Prometheus-compatible system (a minimal query sketch follows this list):

  • Node exporter
  • DCGM exporter
  • Slurm exporter
  • Linking Resource Usage Anomalies with System Failures from Cluster Log Data
  • Deep Survival Models
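
For a feel of the data access involved, the sketch below queries a Prometheus-style HTTP API with the requests library; the server URL is a placeholder, and node_load1 is just one standard node-exporter metric.

```python
# Minimal sketch of pulling metrics from a Prometheus-compatible endpoint
# (assumes the `requests` library; the URL below is a placeholder).
import requests

PROM_URL = "http://prometheus.example.org:9090"   # hypothetical server
resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": "node_load1"})
resp.raise_for_status()

# The standard Prometheus HTTP API wraps results as {"status": ..., "data": ...}.
for series in resp.json()["data"]["result"]:
    print(series["metric"], series["value"])
```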

Feel free to reach out if the topic sounds interesting or if you have ideas related to this work. We can then brainstorm a specific research question together. Link to my personal website.

Construction & Application of Enterprise Knowledge Graphs in the E-Invoicing Domain

  • Guided Research Project
  • knowledge graphs
  • knowledge services
  • linked data
  • semantic web

In recent years, knowledge graphs have received a lot of attention in industry as well as in science. Knowledge graphs consist of entities and relationships between them and allow new knowledge to be integrated arbitrarily. Famous instances in industry are the knowledge graphs of Microsoft, Google, Facebook, and IBM. Beyond these, knowledge graphs are also adopted in more domain-specific scenarios such as e-Procurement, e-Invoicing, and purchase-to-pay processes. The objective in theses and projects is to explore particular aspects of constructing and/or applying knowledge graphs in the domain of purchase-to-pay processes and e-Invoicing.

Anomaly detection in time-series

  • explainability

This topic involves working on deep neural networks to make the time-series anomaly detection process more robust. An important aspect of this process is the explainability of the decisions taken by a network.
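
Before reaching for deep models, a classical baseline helps calibrate expectations (and, later, explanations). The sketch below, assuming only numpy, flags points whose rolling z-score is extreme; the window size and threshold are arbitrary illustrative choices.

```python
# Minimal anomaly-detection baseline (assumes numpy): flag points that deviate
# strongly from a rolling mean. Deep models are typically benchmarked against
# simple baselines like this one.
import numpy as np

rng = np.random.default_rng(0)
series = rng.normal(0, 1, 500)
series[250] = 8.0                       # inject one obvious anomaly

window = 30
anomalies = []
for t in range(window, len(series)):
    ref = series[t - window:t]
    z = (series[t] - ref.mean()) / (ref.std() + 1e-8)
    if abs(z) > 4.0:                    # threshold in standard deviations
        anomalies.append(t)

print(anomalies)                        # should contain index 250
```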

Time Series Forecasting Using transformer Networks

  • time series forecasting
  • transformer networks

Transformer networks have emerged as a competent architecture for modeling sequences. This research will primarily focus on using transformer networks for forecasting time series (multivariate/univariate) and may also involve fusing knowledge into the machine learning architecture.
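
As a sense of scale, a toy version of such a forecaster fits in one screen. The sketch below assumes PyTorch and predicts the next value of a univariate series from a 24-step window; all dimensions and the sine-wave input are illustrative.

```python
# Minimal transformer-based forecaster sketch (assumes PyTorch): encode a
# window of past values and predict the next step.
import torch
import torch.nn as nn

class TinyForecaster(nn.Module):
    def __init__(self, d_model=32):
        super().__init__()
        self.embed = nn.Linear(1, d_model)            # scalar -> d_model features
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)             # last token -> next-step forecast

    def forward(self, x):                 # x: (batch, seq_len, 1)
        h = self.encoder(self.embed(x))
        return self.head(h[:, -1])        # predict the value following the window

model = TinyForecaster()
past = torch.sin(torch.linspace(0, 6.28, 24)).reshape(1, 24, 1)  # toy series
print(model(past).shape)                  # torch.Size([1, 1])
```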


Machine Learning - CMU

PhD Dissertations


[All are .pdf files.]

Learning Models that Match Jacob Tyo, 2024

Improving Human Integration across the Machine Learning Pipeline Charvi Rastogi, 2024

Reliable and Practical Machine Learning for Dynamic Healthcare Settings Helen Zhou, 2023

Automatic customization of large-scale spiking network models to neuronal population activity (unavailable) Shenghao Wu, 2023

Estimation of BVk functions from scattered data (unavailable) Addison J. Hu, 2023

Rethinking object categorization in computer vision (unavailable) Jayanth Koushik, 2023

Advances in Statistical Gene Networks Jinjin Tian, 2023

Post-hoc calibration without distributional assumptions Chirag Gupta, 2023

The Role of Noise, Proxies, and Dynamics in Algorithmic Fairness Nil-Jana Akpinar, 2023

Collaborative learning by leveraging siloed data Sebastian Caldas, 2023

Modeling Epidemiological Time Series Aaron Rumack, 2023

Human-Centered Machine Learning: A Statistical and Algorithmic Perspective Leqi Liu, 2023

Uncertainty Quantification under Distribution Shifts Aleksandr Podkopaev, 2023

Probabilistic Reinforcement Learning: Using Data to Define Desired Outcomes, and Inferring How to Get There Benjamin Eysenbach, 2023

Comparing Forecasters and Abstaining Classifiers Yo Joong Choe, 2023

Using Task Driven Methods to Uncover Representations of Human Vision and Semantics Aria Yuan Wang, 2023

Data-driven Decisions - An Anomaly Detection Perspective Shubhranshu Shekhar, 2023

Applied Mathematics of the Future Kin G. Olivares, 2023

METHODS AND APPLICATIONS OF EXPLAINABLE MACHINE LEARNING Joon Sik Kim, 2023

NEURAL REASONING FOR QUESTION ANSWERING Haitian Sun, 2023

Principled Machine Learning for Societally Consequential Decision Making Amanda Coston, 2023

Long term brain dynamics extend cognitive neuroscience to timescales relevant for health and physiology Maxwell B. Wang, 2023

Long term brain dynamics extend cognitive neuroscience to timescales relevant for health and physiology Darby M. Losey, 2023

Calibrated Conditional Density Models and Predictive Inference via Local Diagnostics David Zhao, 2023

Towards an Application-based Pipeline for Explainability Gregory Plumb, 2022

Objective Criteria for Explainable Machine Learning Chih-Kuan Yeh, 2022

Making Scientific Peer Review Scientific Ivan Stelmakh, 2022

Facets of regularization in high-dimensional learning: Cross-validation, risk monotonization, and model complexity Pratik Patil, 2022

Active Robot Perception using Programmable Light Curtains Siddharth Ancha, 2022

Strategies for Black-Box and Multi-Objective Optimization Biswajit Paria, 2022

Unifying State and Policy-Level Explanations for Reinforcement Learning Nicholay Topin, 2022

Sensor Fusion Frameworks for Nowcasting Maria Jahja, 2022

Equilibrium Approaches to Modern Deep Learning Shaojie Bai, 2022

Towards General Natural Language Understanding with Probabilistic Worldbuilding Abulhair Saparov, 2022

Applications of Point Process Modeling to Spiking Neurons (Unavailable) Yu Chen, 2021

Neural variability: structure, sources, control, and data augmentation Akash Umakantha, 2021

Structure and time course of neural population activity during learning Jay Hennig, 2021

Cross-view Learning with Limited Supervision Yao-Hung Hubert Tsai, 2021

Meta Reinforcement Learning through Memory Emilio Parisotto, 2021

Learning Embodied Agents with Scalably-Supervised Reinforcement Learning Lisa Lee, 2021

Learning to Predict and Make Decisions under Distribution Shift Yifan Wu, 2021

Statistical Game Theory Arun Sai Suggala, 2021

Towards Knowledge-capable AI: Agents that See, Speak, Act and Know Kenneth Marino, 2021

Learning and Reasoning with Fast Semidefinite Programming and Mixing Methods Po-Wei Wang, 2021

Bridging Language in Machines with Language in the Brain Mariya Toneva, 2021

Curriculum Learning Otilia Stretcu, 2021

Principles of Learning in Multitask Settings: A Probabilistic Perspective Maruan Al-Shedivat, 2021

Towards Robust and Resilient Machine Learning Adarsh Prasad, 2021

Towards Training AI Agents with All Types of Experiences: A Unified ML Formalism Zhiting Hu, 2021

Building Intelligent Autonomous Navigation Agents Devendra Chaplot, 2021

Learning to See by Moving: Self-supervising 3D Scene Representations for Perception, Control, and Visual Reasoning Hsiao-Yu Fish Tung, 2021

Statistical Astrophysics: From Extrasolar Planets to the Large-scale Structure of the Universe Collin Politsch, 2020

Causal Inference with Complex Data Structures and Non-Standard Effects Kwhangho Kim, 2020

Networks, Point Processes, and Networks of Point Processes Neil Spencer, 2020

Dissecting neural variability using population recordings, network models, and neurofeedback (Unavailable) Ryan Williamson, 2020

Predicting Health and Safety: Essays in Machine Learning for Decision Support in the Public Sector Dylan Fitzpatrick, 2020

Towards a Unified Framework for Learning and Reasoning Han Zhao, 2020

Learning DAGs with Continuous Optimization Xun Zheng, 2020

Machine Learning and Multiagent Preferences Ritesh Noothigattu, 2020

Learning and Decision Making from Diverse Forms of Information Yichong Xu, 2020

Towards Data-Efficient Machine Learning Qizhe Xie, 2020

Change modeling for understanding our world and the counterfactual one(s) William Herlands, 2020

Machine Learning in High-Stakes Settings: Risks and Opportunities Maria De-Arteaga, 2020

Data Decomposition for Constrained Visual Learning Calvin Murdock, 2020

Structured Sparse Regression Methods for Learning from High-Dimensional Genomic Data Micol Marchetti-Bowick, 2020

Towards Efficient Automated Machine Learning Liam Li, 2020

LEARNING COLLECTIONS OF FUNCTIONS Emmanouil Antonios Platanios, 2020

Provable, structured, and efficient methods for robustness of deep networks to adversarial examples Eric Wong, 2020

Reconstructing and Mining Signals: Algorithms and Applications Hyun Ah Song, 2020

Probabilistic Single Cell Lineage Tracing Chieh Lin, 2020

Graphical network modeling of phase coupling in brain activity (unavailable) Josue Orellana, 2019

Strategic Exploration in Reinforcement Learning - New Algorithms and Learning Guarantees Christoph Dann, 2019

Learning Generative Models using Transformations Chun-Liang Li, 2019

Estimating Probability Distributions and their Properties Shashank Singh, 2019

Post-Inference Methods for Scalable Probabilistic Modeling and Sequential Decision Making Willie Neiswanger, 2019

Accelerating Text-as-Data Research in Computational Social Science Dallas Card, 2019

Multi-view Relationships for Analytics and Inference Eric Lei, 2019

Information flow in networks based on nonstationary multivariate neural recordings Natalie Klein, 2019

Competitive Analysis for Machine Learning & Data Science Michael Spece, 2019

The When, Where and Why of Human Memory Retrieval Qiong Zhang, 2019

Towards Effective and Efficient Learning at Scale Adams Wei Yu, 2019

Towards Literate Artificial Intelligence Mrinmaya Sachan, 2019

Learning Gene Networks Underlying Clinical Phenotypes Under SNP Perturbations From Genome-Wide Data Calvin McCarter, 2019

Unified Models for Dynamical Systems Carlton Downey, 2019

Anytime Prediction and Learning for the Balance between Computation and Accuracy Hanzhang Hu, 2019

Statistical and Computational Properties of Some "User-Friendly" Methods for High-Dimensional Estimation Alnur Ali, 2019

Nonparametric Methods with Total Variation Type Regularization Veeranjaneyulu Sadhanala, 2019

New Advances in Sparse Learning, Deep Networks, and Adversarial Learning: Theory and Applications Hongyang Zhang, 2019

Gradient Descent for Non-convex Problems in Modern Machine Learning Simon Shaolei Du, 2019

Selective Data Acquisition in Learning and Decision Making Problems Yining Wang, 2019

Anomaly Detection in Graphs and Time Series: Algorithms and Applications Bryan Hooi, 2019

Neural dynamics and interactions in the human ventral visual pathway Yuanning Li, 2018

Tuning Hyperparameters without Grad Students: Scaling up Bandit Optimisation Kirthevasan Kandasamy, 2018

Teaching Machines to Classify from Natural Language Interactions Shashank Srivastava, 2018

Statistical Inference for Geometric Data Jisu Kim, 2018

Representation Learning @ Scale Manzil Zaheer, 2018

Diversity-promoting and Large-scale Machine Learning for Healthcare Pengtao Xie, 2018

Distribution and Histogram (DIsH) Learning Junier Oliva, 2018

Stress Detection for Keystroke Dynamics Shing-Hon Lau, 2018

Sublinear-Time Learning and Inference for High-Dimensional Models Enxu Yan, 2018

Neural population activity in the visual cortex: Statistical methods and application Benjamin Cowley, 2018

Efficient Methods for Prediction and Control in Partially Observable Environments Ahmed Hefny, 2018

Learning with Staleness Wei Dai, 2018

Statistical Approach for Functionally Validating Transcription Factor Bindings Using Population SNP and Gene Expression Data Jing Xiang, 2017

New Paradigms and Optimality Guarantees in Statistical Learning and Estimation Yu-Xiang Wang, 2017

Dynamic Question Ordering: Obtaining Useful Information While Reducing User Burden Kirstin Early, 2017

New Optimization Methods for Modern Machine Learning Sashank J. Reddi, 2017

Active Search with Complex Actions and Rewards Yifei Ma, 2017

Why Machine Learning Works George D. Montañez, 2017

Source-Space Analyses in MEG/EEG and Applications to Explore Spatio-temporal Neural Dynamics in Human Vision Ying Yang, 2017

Computational Tools for Identification and Analysis of Neuronal Population Activity Pengcheng Zhou, 2016

Expressive Collaborative Music Performance via Machine Learning Gus (Guangyu) Xia, 2016

Supervision Beyond Manual Annotations for Learning Visual Representations Carl Doersch, 2016

Exploring Weakly Labeled Data Across the Noise-Bias Spectrum Robert W. H. Fisher, 2016

Optimizing Optimization: Scalable Convex Programming with Proximal Operators Matt Wytock, 2016

Combining Neural Population Recordings: Theory and Application William Bishop, 2015

Discovering Compact and Informative Structures through Data Partitioning Madalina Fiterau-Brostean, 2015

Machine Learning in Space and Time Seth R. Flaxman, 2015

The Time and Location of Natural Reading Processes in the Brain Leila Wehbe, 2015

Shape-Constrained Estimation in High Dimensions Min Xu, 2015

Spectral Probabilistic Modeling and Applications to Natural Language Processing Ankur Parikh, 2015

Computational and Statistical Advances in Testing and Learning Aaditya Kumar Ramdas, 2015

Corpora and Cognition: The Semantic Composition of Adjectives and Nouns in the Human Brain Alona Fyshe, 2015

Learning Statistical Features of Scene Images Wooyoung Lee, 2014

Towards Scalable Analysis of Images and Videos Bin Zhao, 2014

Statistical Text Analysis for Social Science Brendan T. O'Connor, 2014

Modeling Large Social Networks in Context Qirong Ho, 2014

Semi-Cooperative Learning in Smart Grid Agents Prashant P. Reddy, 2013

On Learning from Collective Data Liang Xiong, 2013

Exploiting Non-sequence Data in Dynamic Model Learning Tzu-Kuo Huang, 2013

Mathematical Theories of Interaction with Oracles Liu Yang, 2013

Short-Sighted Probabilistic Planning Felipe W. Trevizan, 2013

Statistical Models and Algorithms for Studying Hand and Finger Kinematics and their Neural Mechanisms Lucia Castellanos, 2013

Approximation Algorithms and New Models for Clustering and Learning Pranjal Awasthi, 2013

Uncovering Structure in High-Dimensions: Networks and Multi-task Learning Problems Mladen Kolar, 2013

Learning with Sparsity: Structures, Optimization and Applications Xi Chen, 2013

GraphLab: A Distributed Abstraction for Large Scale Machine Learning Yucheng Low, 2013

Graph Structured Normal Means Inference James Sharpnack, 2013 (Joint Statistics & ML PhD)

Probabilistic Models for Collecting, Analyzing, and Modeling Expression Data Hai-Son Phuoc Le, 2013

Learning Large-Scale Conditional Random Fields Joseph K. Bradley, 2013

New Statistical Applications for Differential Privacy Rob Hall, 2013 (Joint Statistics & ML PhD)

Parallel and Distributed Systems for Probabilistic Reasoning Joseph Gonzalez, 2012

Spectral Approaches to Learning Predictive Representations Byron Boots, 2012

Attribute Learning using Joint Human and Machine Computation Edith L. M. Law, 2012

Statistical Methods for Studying Genetic Variation in Populations Suyash Shringarpure, 2012

Data Mining Meets HCI: Making Sense of Large Graphs Duen Horng (Polo) Chau, 2012

Learning with Limited Supervision by Input and Output Coding Yi Zhang, 2012

Target Sequence Clustering Benjamin Shih, 2011

Nonparametric Learning in High Dimensions Han Liu, 2010 (Joint Statistics & ML PhD)

Structural Analysis of Large Networks: Observations and Applications Mary McGlohon, 2010

Modeling Purposeful Adaptive Behavior with the Principle of Maximum Causal Entropy Brian D. Ziebart, 2010

Tractable Algorithms for Proximity Search on Large Graphs Purnamrita Sarkar, 2010

Rare Category Analysis Jingrui He, 2010

Coupled Semi-Supervised Learning Andrew Carlson, 2010

Fast Algorithms for Querying and Mining Large Graphs Hanghang Tong, 2009

Efficient Matrix Models for Relational Learning Ajit Paul Singh, 2009

Exploiting Domain and Task Regularities for Robust Named Entity Recognition Andrew O. Arnold, 2009

Theoretical Foundations of Active Learning Steve Hanneke, 2009

Generalized Learning Factors Analysis: Improving Cognitive Models with Machine Learning Hao Cen, 2009

Detecting Patterns of Anomalies Kaustav Das, 2009

Dynamics of Large Networks Jurij Leskovec, 2008

Computational Methods for Analyzing and Modeling Gene Regulation Dynamics Jason Ernst, 2008

Stacked Graphical Learning Zhenzhen Kou, 2007

Actively Learning Specific Function Properties with Applications to Statistical Inference Brent Bryan, 2007

Approximate Inference, Structure Learning and Feature Estimation in Markov Random Fields Pradeep Ravikumar, 2007

Scalable Graphical Models for Social Networks Anna Goldenberg, 2007

Measure Concentration of Strongly Mixing Processes with Applications Leonid Kontorovich, 2007

Tools for Graph Mining Deepayan Chakrabarti, 2005

Automatic Discovery of Latent Variable Models Ricardo Silva, 2005

AI masters class, 2021-2022

This is the webpage for AI master's students at NTNU, 2021-2022. The site contains useful information and resources, free for students to use.

  • Efficient reading of scientific papers
  • How evaluation guides AI research, Cohen and Howe, AI Magazine, vol. 9, no. 4, 1988
  • Writing Good Software Engineering Research Papers (PDF), M. Shaw, Proceedings of the 25th International Conference on Software Engineering, IEEE Computer Society, 2003, pp. 726-736
  • Choosing a Computer Science research problem (PDF)
  • How to do a structured literature review (SLR)
  • The master thesis template in PDF
  • The LaTeX files for the master template

Master Thesis Presentation Template

A master thesis caps off the time and knowledge you have invested in your upper-level education courses. Now you can create an A-worthy thesis in half the time with Beautiful.ai's master thesis template.

Our customizable template has all the basics to help you sum up your comprehensive knowledge on the course and prove your skills in the field. Slides like literature review, research methodology, and a strong thesis conclusion will help you stand out to the panel or faculty members. A thoughtful master thesis presentation can help students wrap up their time in the program and apply their findings to their careers. 

Our master thesis template can also help you:

  • Customize your idea or statement for different audiences
  • Organize your argument in a thoughtful way
  • Provide a guide for the panel to read and follow along with

Use our template to create an effective master thesis presentation

A master thesis presentation is crucial to the success of your master's program – one that requires a concise format, clear layout, and seamless flow. That's why our template includes everything you need to create an effective presentation. Whether you need to organize your argument in a meaningful way or showcase more resources, you can quickly bring your visions to life with these slides:

Title Slide

Tips to create an impactful master thesis presentation

As you use this template to craft your master thesis presentation, keep these do’s and don’ts in mind:

  • Condensing hours and hours of research can be daunting. Build an outline or table of contents first, then stick to that structure as you create your presentation.
  • It can be easy to get caught up in your research and findings, but don't forget to answer critical questions like, 'Why is this important?' and 'What results have you achieved?'
  • Remember: you aren't recreating your entire thesis as a visual presentation. Limit the amount of content and data you add to each slide.
  • Your master thesis presentation is your chance to share all of your hard work. Don't be afraid to showcase bits of your personality throughout.



Smodin's Thesis Generator: The Ultimate Tool for Crafting a Winning Thesis

Generate a thesis statement or a multi-part thesis in just a click with our Thesis Generator. Quickly find sources for your thesis with our AI Research tool to produce a thesis of scholarly quality.

Smodin's Free Thesis Generator & Thesis Statement Writer

If you're struggling to come up with a thesis statement for your research paper, Smodin's free thesis generator can help. Our innovative tool uses advanced AI algorithms to quickly generate thesis statements based on your topic and requirements. With Smodin's Thesis Generator, you can save time and effort while ensuring your thesis statement is clear, concise, and on point.

How Smodin's Free Thesis Generator Works

Smodin's free thesis generator uses advanced AI algorithms to quickly and accurately generate thesis statements based on your topic and requirements. To use our tool, simply input your topic or subject, choose the type of paper you're writing, and select the main idea you want to convey in your thesis statement.

How to Use Smodin's Thesis Statement Writer

Using Smodin's free thesis generator is simple and easy, and can save you valuable time and effort when writing a research paper. With our tool, you can generate high-quality thesis statements quickly and efficiently, giving you more time to focus on analysis and writing. So why not give Smodin's free thesis generator a try today and see how it can benefit your research paper writing process?

Tips for Using Smodin's Free Thesis Generator

To get the most out of Smodin's free thesis generator, input a clear and concise topic or keywords to ensure accurate results. Refine and edit the generated thesis statement to ensure it fits your research paper perfectly, and use it as a starting point for your own research and writing. With these tips in mind, you can streamline the process of generating a thesis statement and produce high-quality research papers that meet your specific requirements and style.

Why Smodin's Free Thesis Generator is the Perfect Tool for Students and Researchers

Smodin's free thesis generator is an essential tool for students and researchers looking to streamline the process of writing a research paper. With advanced AI algorithms, our tool quickly generates clear and concise thesis statements based on your topic and requirements, ensuring your research paper is of the highest quality.

In addition to saving time and effort, using Smodin's free thesis generator ensures your thesis statement is well-crafted and effective. With our tool, you can focus on conducting thorough research and crafting a compelling argument, rather than getting bogged down in the preliminary stages of the writing process.



Paperpal for Students

Experience a new era for PhD thesis writing and English editing

Sharpen your academic writing skills and deliver high-quality PhD thesis, dissertation, or essay writing for students

Improve your academic writing skills with Paperpal

Whether you’re a student studying for a doctorate or master’s degree, your lecturer’s priority is enhancing your understanding of a particular research topic or discipline. But what about perfecting the language you use to communicate this knowledge?

Developing the right academic writing skills can be a challenge when you’re struggling with PhD thesis writing or essay writing for students. Some of the most common stumbling blocks include:

  • Lexical issues and incorrect spelling
  • Content structure and layout
  • Grammar and punctuation
  • Accidental plagiarism

Academic essay writing for students, including doctoral dissertation writing and PhD thesis editing, draws on basic academic writing skills students are expected to have. After putting in years of work, imagine how disappointing it would be to be tripped up by avoidable language and grammar issues. We want YOU to succeed. Paperpal, a trusted AI writing assistant, transforms academic writing for students with real-time English editing tips from the first draft itself. Move closer to your goals as you improve your language and grammar, enhance your writing style, and ensure overall clarity with our subject-specific writing and editing recommendations.

Want to increase your chances of success with an error-free manuscript?

Trusted by top global academic societies, publishers, and universities

Paperpal is the best AI writing assistant for students

Develops your essential academic writing skills.

Your dissertation or thesis is probably the longest, most challenging piece of content you’ll write as a PhD student. Whether you are aiming for top marks or looking to publish in a reputed journal, developing strong PhD thesis writing skills will lay the foundation for your academic success. But with detailed research and looming deadlines, we also understand how overwhelming PhD thesis writing and proofreading can be. A 2020 Pearson survey of more than 1,700 students found that at least 33% felt they lacked the ability to spot potential errors in their academic writing and struggled to reach out for support when needed. This is where Paperpal for Word comes in with comprehensive grammar, spelling, punctuation, and readability suggestions, giving you the power to improve and speed up the academic writing process as you write. You also get detailed English writing tips that explain errors and how to fix them, which helps you strengthen your academic writing skills over time.


Reduces time on editing and proofreading for students

It is a well-known adage that PhD thesis writing is 25% writing and 75% revision. It's not enough to know your subject, conduct research, and reach conclusions; how you showcase your research can be the difference between a mediocre and a great paper! When working on your thesis or dissertation, remember that first impressions count. Your grammar, spelling, and structure are just as important as your research question and methodology, so it's critical to get this right. With subject-specific suggestions to improve your academic writing skills, Paperpal can improve and speed up essay writing for students. The thorough language check and detailed editing recommendations help to polish your document and further reduce the time spent on proofreading for students. Submitting the best version of your paper can minimize feedback during the evaluation stage and even boost your overall score.

Goes beyond a grammar check to give you better results

Finding an online sentence checker or a basic grammar and language tool is easy. What you really need is a smart AI writing assistant that understands your work and what you’re trying to achieve. Paperpal recognizes the importance of academic writing for graduate students and helps you improve your academic writing skills from the all-important first draft itself. Basic English editing is not enough for budding researchers. This is where Paperpal, tailored to academic writing conventions, is the perfect AI writing assistant for students. Imagine the benefits of having key insights, based on millions of pre- and post-edited manuscripts, right at your fingertips! For instance, using Paperpal for Word for your PhD thesis writing will give you the grammar, punctuation, and vocabulary checks most online tools offer. But Paperpal delves even deeper with suggestions on how to rephrase sentences, improve article structure, and other such edits to polish your writing. This not only takes your academic writing skills up a notch, it saves the countless hours you would otherwise spend editing your work.


Everyone needs support with academic writing and English editing now and then. Let us help!


Experience Paperpal in Word and on Web

Paperpal for Word is free and easy to install; just click on the 'Paperpal for Word' button to get started. Those who prefer not to use our Word add-in will like our easy-to-use online English editing tool. Paperpal for Web allows you to write, copy, or upload your text in the browser and receive instant language and grammar fixes to polish your thesis or dissertation. Students who are ready with a fully completed article can check their submission readiness with Paperpal for Manuscript. This handy tool allows you to upload the final draft of your article or PhD thesis and download a Word document with all the relevant corrections and suggestions incorporated as tracked changes. With just a few clicks, you save the hours you would otherwise spend assessing and revising your article for submission. This trusted AI writing assistant also allows you to revise and check the same article multiple times at no additional cost!

Get the premium editing your paper needs and deserves.


Kick-start your publishing journey

One study by a Harvard research team (Evans et al., 2018) on the outcomes of psychology doctoral dissertations estimated that only 25% of submitted theses are published in peer-reviewed journals. Converting your PhD dissertation into a journal article is hard work for students. You need to restructure your article and ensure it meets the target journal's very specific technical requirements, including referencing style and manuscript structure. This is not easy, and not everyone gets it right. Journal manuscript submissions often get desk rejected because they don't pass the basic technical checks. With already lengthy submission turnaround times, desk rejection due to often avoidable issues further delays your journey to publication success. We know how frustrating these delays can be, so we're inspiring change. Paperpal's AI tools can help speed up and improve the academic writing, English editing, and journal submission process for students and researchers. Not only can Paperpal refine your academic writing skills, it also helps you ensure your article meets the language and technical requirements for publication.

The smartest way to submission readiness

If you’re ready with a research manuscript and are struggling with final checks before you submit, Paperpal for Manuscript can help. All you need to do is upload your paper and Paperpal’s AI will do a thorough check before presenting you with a detailed evaluation with all the issues flagged. For just $29, you can download a comprehensively edited version of your article with all errors marked up, allowing you to review and revise your document in minutes. You can review and accept suggested changes and check your revised manuscript as many times as needed to get it submission ready, at no extra cost.

From PhD thesis writing to getting published in top journals, we’re here to help!


Master Thesis Program

In the spring of 2023, the Master Thesis Program welcomed students from Chalmers University of Technology, the Swedish University of Agricultural Sciences, Gothenburg School of Economics, and partners including Centiro, RISE, Zenseact, SLU, Recorded Future, Volvo Cars, and AstraZeneca. We are now ready for a new round starting in January 2024 and are looking to start the program in Gothenburg, Linköping, Luleå, Lund and Örebro.


Drawing on the valuable insights shared by students and the experience gained from previous program cycles, we aim to support the new cohort with tasks that are commonly overlooked. We invite the students to a series of workshops (once a month from February to April) to help them develop their soft skills, such as giving a talk and writing a master thesis. They will be welcomed into our offices for the period of their thesis work, where we offer a quiet space, access to our infrastructure and technical support, not to mention a friendly environment.

Is your organization interested in gaining value and new perspectives from masters students at some of the top universities in Sweden?

How to take part (as a supervisor)

Please register your interest by sending an email with a description of your suggested topic for one or more master theses to [email protected] and [email protected], and post it on My.AI under Resources/Master Thesis Proposals by October 31.

The description should include:

  • Name of the project
  • Short description of the topic
  • Your organization
  • Your contact person and email address
  • Preferred student background
  • Preferred location

The thesis work will begin in January 2024. We will help you spread the word and recruit the best students for your project.

What can partners expect?

Student-industry collaborations are mutually beneficial. They create opportunities to share new insights and points of view while building competence and expertise on both sides.

  • Benefit from the knowledge of master students to solve current problems or dig deeper into existing tasks.
  • Boost your attractiveness to possible future employees by joining these collaborations.
  • Get the opportunity to recruit students directly before they graduate.

How to take part (as a student)

If you already have an AI-related thesis, express your interest in the program by sending your application.

We rarely provide master thesis projects from AI Sweden ourselves, but we encourage you to go to My.AI to find suggestions from partners and other organizations in the AI ecosystem. You will find suggestions under Resources - Master Thesis (Resurser - Exjobbsförslag). The page will be updated with new proposals regularly.

If you are looking for a thesis topic/project, go to My.AI under Resources/Master Thesis and contact the reference person for the master thesis project that interests you; hopefully, we will see you around in January. Once you find a project, send your application to participate in the program here.

What can candidates expect?

  • Valuable experience from working with real projects and applying theoretical knowledge.
  • A community where you can benefit from each other's experiences and challenges in different organizations and sectors.
  • Access to exclusive events, lectures, and other activities.
  • The chance to work with the best in the field across Sweden.

About the program

Students who enter the program work on the topics that matter most to partners and sit at one of our locations. They benefit from events and seminars by AI Sweden experts during the spring, focused on growing their soft skills and providing extra support with common struggles, such as the writing process and defining and sticking to the research question they want to address in their thesis work. Last but not least, they will have access to the Data Factory, the Edge Learning Lab, a DGX A100, and other resources.

See examples of previous students' topics on My.AI

In spring 2022, the cohort welcomed students from Chalmers and SLU, and partners including Zenseact, SLU, Unibap, Volvo, and AstraZeneca. Some of the dissertations picked up on challenges and created greater awareness of topics such as cyber security, both at partners and at AI Sweden.

"This is a great example of a low risk, low effort initiative, where organizations can explore and develop new areas ." - Kim Henriksson, Project Manager at AI Sweden’s Edge Lab

For more information, contact

Sofia Hedén


How to Write a Better Thesis Statement Using AI (2023 Updated)

Meredith Sell

With the exception of poetry and fiction, every piece of writing needs a thesis statement.

- Opinion pieces for the local newspaper? Yes. 

- An essay for a college class? You betcha.

- A book about China’s Ming Dynasty? Absolutely.

All of these pieces of writing need a thesis statement that sums up what they’re about and tells the reader what to expect, whether you’re making an argument, describing something in detail, or exploring ideas.

But how do you write a thesis statement? How do you even come up with one?

master thesis ai

This step-by-step guide will show you exactly how — and help you make sure every thesis statement you write has all the parts needed to be clear, coherent, and complete.

Let’s start by making sure we understand what a thesis is (and what it’s not).

What Is a Thesis Statement?

A thesis statement is a one- or two-sentence statement that concisely describes your paper's subject, angle, or position — and offers a preview of the evidence or argument your essay will present.

A thesis is not:

  • An exclamation
  • A simple fact

Think of your thesis as the road map for your essay. It briefly charts where you’ll start (subject), what you’ll cover (evidence/argument), and where you’ll land (position, angle). 

Writing a thesis early in your essay writing process can help you keep your writing focused, so you won’t get off-track describing something that has nothing to do with your central point. Your central point is your thesis, and the rest of your essay fleshes it out.

Get help writing your thesis statement with this FREE AI tool >


Different Kinds of Papers Need Different Kinds of Theses

How you compose your thesis will depend on the type of essay you’re writing. For academic writing, there are three main kinds of essays:

  • Persuasive, aka argumentative
  • Expository, aka explanatory
  • Narrative

A persuasive essay requires a thesis that clearly states the central stance of the paper: what the rest of the paper will argue in support of.

Paper books are superior to ebooks when it comes to form, function, and overall reader experience.

An expository essay’s thesis sets up the paper’s focus and angle — the paper’s unique take, what in particular it will be describing and why . The why element gives the reader a reason to read; it tells the reader why the topic matters.

Understanding the functional design of physical books can help ebook designers create digital reading experiences that usher readers into literary worlds without technological difficulties.

A narrative essay's thesis is similar to that of an expository essay, but it may be less focused on tangible realities and more on intangibles of, for example, the human experience.

The books I’ve read over the years have shaped me, opening me up to worlds and ideas and ways of being that I would otherwise know nothing about.

As you prepare to craft your thesis, think through the goal of your paper. Are you making an argument? Describing the chemical properties of hydrogen? Exploring your relationship with the outdoors? What do you want the reader to take away from reading your piece?

Make note of your paper’s goal and then walk through our thesis-writing process.

Now that you practically have a PhD in theses, let’s learn how to write one:

How to Write (and Develop) a Strong Thesis

If developing a thesis is stressing you out, take heart — basically no one has a strong thesis right away. Developing a thesis is a multi-step process that takes time, thought, and perhaps most important of all: research.

Tackle these steps one by one and you’ll soon have a thesis that’s rock-solid.

1. Identify your essay topic.

Are you writing about gardening? Sword etiquette? King Louis XIV?

With your assignment requirements in mind, pick out a topic (or two) and do some preliminary research. Read up on the basic facts of your topic. Identify a particular angle or focus that's interesting to you. If you're writing a persuasive essay, look for an aspect that people have contentious opinions on (and read our piece on persuasive essays to craft a compelling argument).

If your professor assigned a particular topic, you’ll still want to do some reading to make sure you know enough about the topic to pick your specific angle.

For those writing narrative essays involving personal experiences, you may need to do a combination of research and freewriting to explore the topic before homing in on what's most compelling to you.

Once you have a clear idea of the topic and what interests you, go on to the next step.

2. Ask a research question.

You know what you’re going to write about, at least broadly. Now you just have to narrow in on an angle or focus appropriate to the length of your assignment. To do this, start by asking a question that probes deeper into your topic. 

This question may explore connections between causes and effects, the accuracy of an assumption you have, or a value judgment you’d like to investigate, among others.

For example, if you want to write about gardening for a persuasive essay and you’re interested in raised garden beds, your question could be:

What are the unique benefits of gardening in raised beds versus on the ground? Is one better than the other?

Or if you’re writing about sword etiquette for an expository essay , you could ask:

How did sword etiquette in Europe compare to samurai sword etiquette in Japan?

How does medieval sword etiquette influence modern fencing?

Kickstart your curiosity and come up with a handful of intriguing questions. Then pick the two most compelling to initially research (you’ll discard one later).

3. Answer the question tentatively.

You probably have an initial thought of what the answer to your research question is. Write that down in as specific terms as possible. This is your working thesis.

Gardening in raised beds is preferable because you won’t accidentally awaken dormant weed seeds — and you can provide more fertile soil and protection from invasive species.

Medieval sword-fighting rituals are echoed in modern fencing etiquette.

Why is a working thesis helpful?

Both your research question and your working thesis will guide your research. It’s easy to start reading anything and everything related to your broad topic — but for a 4-, 10-, or even 20-page paper, you don’t need to know everything. You just need the relevant facts and enough context to accurately and clearly communicate to your reader.

Your working thesis will not be identical to your final thesis, because you don’t know that much just yet.

This brings us to our next step:

4. Research the question (and working thesis).

What do you need to find out in order to evaluate the strength of your thesis? What do you need to investigate to answer your research question more fully? 

Comb through authoritative, trustworthy sources to find that information. And keep detailed notes.

As you research, evaluate the strengths and weaknesses of your thesis — and see what other opposing or more nuanced theses exist. 

If you’re writing a persuasive essay, it may be helpful to organize information according to what does or does not support your thesis — or simply gather the information and see if it’s changing your mind. What new opinion do you have now that you’ve learned more about your topic and question? What discoveries have you made that discredit or support your initial thesis?

Raised garden beds prevent full maturity in certain plants — and are more prone to cold, heat, and drought.

If you’re writing an expository essay, use this research process to see if your initial idea holds up to the facts. And be on the lookout for other angles that would be more appropriate or interesting for your assignment.

Modern fencing doesn’t share many rituals with medieval swordplay.

With all this research under your belt, you can answer your research question in-depth — and you’ll have a clearer idea of whether or not your working thesis is anywhere near being accurate or arguable. What’s next?

5. Refine your thesis.

If you found that your working thesis was totally off-base, you’ll probably have to write a new one from scratch. 

For a persuasive essay, maybe you found a different opinion far more compelling than your initial take. For an expository essay, maybe your initial assumption was completely wrong — could you flip your thesis around and inform your readers of what you learned?

Use what you’ve learned to rewrite or revise your thesis to be more accurate, specific, and compelling.

Raised garden beds appeal to many gardeners for the semblance of control they offer over what will and will not grow, but they are also more prone to changes in weather and air temperature and may prevent certain plants from reaching full maturity. All of this makes raised beds the worse option for ambitious gardeners. 

While swordplay can be traced back through millennia, modern fencing has little in common with medieval combat where swordsmen fought to the death.

If you’ve been researching two separate questions and theses, now’s the time to evaluate which one is most interesting, compelling, or appropriate for your assignment. Did one thesis completely fall apart when faced with the facts? Did one fail to turn up any legitimate sources or studies? Choose the stronger question or the more interesting (revised) thesis, and discard the other.

6. Get help from AI

To make the process even easier, you can take advantage of Wordtune's generative AI capabilities to craft an effective thesis statement. Take your current thesis statement and run it through the paraphrase tool to get suggestions for better ways of articulating it. Wordtune will generate a set of related phrases, which you can select from to help you refine your statement. You can also use Wordtune's suggestions to craft the thesis statement itself: write your initial introduction sentence, then click '+' and select the explain suggestion. Browse through the suggestions until you have a statement that captures your idea perfectly.


Thesis Check: Look for These Three Elements

At this point, you should have a thesis that will set up an original, compelling essay, but before you set out to write that essay, make sure your thesis contains these three elements:

  • Topic: Your thesis should clearly state the topic of your essay, whether swashbuckling pirates, raised garden beds, or methods of snow removal.
  • Position or angle: Your thesis should zoom into the specific aspect of your topic that your essay will focus on, and briefly but boldly state your position or describe your angle.
  • Summary of evidence and/or argument: In a concise phrase or two, your thesis should summarize the evidence and/or argument your essay will present, setting up your readers for what’s coming without giving everything away.

The challenge for you is communicating each of these elements in a sentence or two. But remember: your thesis will come at the end of your intro, which will already have done some work to establish your topic and focus. Those aspects don't need to be over-explained in your thesis — just clearly mentioned and tied to your position and evidence.

Let’s look at our examples from earlier to see how they accomplish this:

Notice how:

  • The topic is mentioned by name. 
  • The position or angle is clearly stated. 
  • The evidence or argument is set up, as well as the assumptions or opposing view that the essay will debunk.

Both theses prepare the reader for what’s coming in the rest of the essay: 

  • An argument to show that raised beds are actually a poor option for gardeners who want to grow thriving, healthy, resilient plants.
  • An exposition of modern fencing in comparison with medieval sword fighting that shows how different they are.

Examine your refined thesis. Are all three elements present? If any are missing, make any additions or clarifications needed to correct it.

It’s Essay-Writing Time!

Now that your thesis is ready to go, you have the rest of your essay to think about. With the work you’ve already done to develop your thesis, you should have an idea of what comes next — but if you need help forming your persuasive essay’s argument, we’ve got a blog for that.


Generative AI is allowing golf fans to get closer to the Masters Tournament than ever before. Here's how.

By Noah Syken, Vice President of Sports and Entertainment Partnerships, IBM

For 90 years, Augusta National Golf Course has awed and delighted players and patrons alike at the annual rite of spring known as the Masters Tournament. It is a course that holds both history and heartbreak, challenging some of the world's greatest golfers as they chase the most coveted prize in golf: the Green Jacket. 

That's because Augusta National, undeniably beautiful and impeccably maintained, is deceptively difficult, with subtle defenses not always obvious to the casual observer. The greens are true but undulating and lightning fast. The fairways are hilly and sloped in ways that produce very few even lies. This is not a course that can simply be overpowered. Winning the Masters requires precision, intelligence, and insight.

Players gather insight about the course through experience. They walk the course, play practice rounds, study yardage books, and confer with their caddies. But for those of us not lucky enough to set foot inside the ropes, there is another way to unlock the mysteries of Augusta National: AI-powered features in the Masters app.

To get fans closer to the Masters than ever before, IBM worked with the Masters digital team to infuse generative AI into the 2024 Masters app. Hole Insights with IBM watsonx is a new feature that adds context to every shot, on every hole. The moment a ball comes to rest, the x, y, and z coordinates are captured, analyzed, and compared against eight years of Masters historical data.

For example, if a player hits their drive to the right side of the fairway on hole 13, the generative AI model might produce the following insight: "From this spot, players make birdie or better 39% of the time, versus 31% from the left side of the fairway." We're using similar AI capabilities to add spoken narration, in both English and Spanish, to more than 20,000 video clips of every shot in the tournament.
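To make the arithmetic behind such an insight concrete, here is a minimal sketch in Python of how a birdie-or-better rate could be computed from historical shot records. Everything in it (the Shot schema, the region labels, and the sample data) is a hypothetical illustration, not IBM's actual pipeline or data.

```python
from dataclasses import dataclass

@dataclass
class Shot:
    """One historical shot record (hypothetical schema)."""
    hole: int
    region: str          # e.g. "fairway_right" / "fairway_left" (invented labels)
    result_to_par: int   # player's final score on the hole, relative to par

# Invented sample of records; the real system compares against eight years of data.
history = [
    Shot(13, "fairway_right", -1), Shot(13, "fairway_right", 0),
    Shot(13, "fairway_right", -1), Shot(13, "fairway_left", 0),
    Shot(13, "fairway_left", -1), Shot(13, "fairway_left", 1),
]

def birdie_or_better_rate(shots, hole, region):
    """Fraction of matching holes finished at birdie or better."""
    matching = [s for s in shots if s.hole == hole and s.region == region]
    if not matching:
        return None
    return sum(1 for s in matching if s.result_to_par <= -1) / len(matching)

right = birdie_or_better_rate(history, 13, "fairway_right")
left = birdie_or_better_rate(history, 13, "fairway_left")
print(f"Birdie or better: {right:.0%} from the right side vs {left:.0%} from the left")
```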

As I watched this work come together over the last few months, I couldn't help but think about the similarities between winning the Masters and building a great AI experience. Augusta National demands a complete game from tee to green. Long drives certainly help. But without precise iron play and smooth putting, you may not even make the weekend, let alone don the Green Jacket. 

Similarly, it takes more than just a powerful foundation model to build a meaningful AI. First, these models need to be fueled by an organization's trusted data. They also need to be managed and monitored throughout their lifetime. And don't forget the user experience. How and when AI-generated content is delivered to an end user is an important factor in determining its value. 

In tech industry circles, there's a tendency to count parameters or tokens to quantify the power of an AI model. But IBM's partnership with the Masters is about more than raw power; it's about trust. The Masters is one of the most valuable brands in the world. Everything about the experience, whether in person, online, or on television, is managed meticulously. Failure is not an option. So when Augusta National Golf Club decides to use AI capabilities in its award-winning app, it needs to know the models are going to enhance the fan experience and operate as intended.

IBM watsonx can carry the Masters into the age of generative AI. We can manage the lifecycle of their AI models, from curating their trusted data to training open-source foundation models to managing and monitoring the results. In other words, we've used every club in the bag.

Find out more about IBM at the Masters here.

This post was created by IBM with Insider Studios.


Mastering cloud economics in the era of AI adoption

How can businesses master cloud economics in an era of rapid AI adoption?


The acceleration of artificial intelligence (AI) adoption has had significant implications for enterprise cloud economics. As businesses invest heavily in AI, they must also manage escalating cloud costs strategically to remain competitive in this transformative era. In this article, we look at the steps businesses can take to navigate the economic terrain of cloud computing.

Governance and process optimization

In cloud economics, uncontrolled cost growth poses a significant challenge when effective governance is lacking. To address this, businesses must be proactive in establishing a robust governance framework for their AI services. This involves defining a predetermined set of services tailored to the organization's specific needs, coupled with clear service level agreements (SLAs). These SLAs outline performance metrics, availability, and support for each service, ensuring transparency and accountability in the use of AI resources.

To improve the efficiency of AI workload deployment, organizations can adopt landing zone templates. These templates, configured for various tasks such as custom AI models, NLP, speech, and vision recognition, provide a consistent foundation for resource deployment.
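As a rough illustration of the idea, the sketch below expresses a landing zone catalogue as declarative configuration that every deployment must draw from. The template names, fields, and values are invented for this example and are not tied to any particular cloud provider.

```python
# Hypothetical landing zone catalogue: each template fixes a vetted baseline
# of compute, scaling, and monitoring settings for one class of AI workload.
LANDING_ZONES = {
    "custom-model-training": {"gpu_cores": 8, "autoscale": True,  "log_retention_days": 90},
    "nlp-inference":         {"gpu_cores": 1, "autoscale": True,  "log_retention_days": 30},
    "vision-batch":          {"gpu_cores": 4, "autoscale": False, "log_retention_days": 30},
}

def provision(workload: str, template: str) -> dict:
    """Build a deployment spec from an approved template, or refuse."""
    if template not in LANDING_ZONES:
        raise ValueError(f"{template!r} is not an approved landing zone template")
    return {"workload": workload, **LANDING_ZONES[template]}

print(provision("sentiment-service", "nlp-inference"))
```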

Additionally, automated onboarding and offboarding processes can minimize manual intervention and errors. Organizations should also establish a standardized chargeback and pricing mechanism, offering transparent tracking of AI service costs and facilitating informed decisions based on resource consumption patterns. Furthermore, a structured invoice reconciliation process ensures financial transparency by promptly surfacing and addressing billing discrepancies.


Cloud resource tracking and optimization

Deploying AI resources can be straightforward, but managing them wisely reduces the total cost of ownership. Enforcing tagging best practices allows businesses to logically group resources for effective tracking, providing visibility into each resource's purpose and ownership, which aids in efficient cost allocation and management.
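For instance, once every resource carries a small mandatory set of tags, cost allocation reduces to a group-by over the billing data. Here is a minimal sketch, assuming a hypothetical billing export with invented tag keys and cost figures:

```python
from collections import defaultdict

# Hypothetical billing export: one record per resource, with tags and monthly cost.
resources = [
    {"id": "vm-001",  "tags": {"team": "nlp",    "env": "prod"}, "monthly_cost": 1200.0},
    {"id": "vm-002",  "tags": {"team": "nlp",    "env": "dev"},  "monthly_cost": 310.0},
    {"id": "gpu-001", "tags": {"team": "vision", "env": "prod"}, "monthly_cost": 4800.0},
    {"id": "db-001",  "tags": {"env": "prod"},                   "monthly_cost": 450.0},
]

def cost_by_tag(items, key):
    """Sum monthly cost per tag value; resources missing the tag surface as 'untagged'."""
    totals = defaultdict(float)
    for r in items:
        totals[r["tags"].get(key, "untagged")] += r["monthly_cost"]
    return dict(totals)

print(cost_by_tag(resources, "team"))
# -> {'nlp': 1510.0, 'vision': 4800.0, 'untagged': 450.0}
```

Note how the untagged database shows up in its own bucket: enforcing the tagging policy is precisely what keeps that bucket empty.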

When developing custom AI models, it's crucial to right-size resources for optimization. Adjusting CPU or GPU cores, optimizing SKUs, and fine-tuning database, storage, and networking configurations aligns resources with actual requirements, preventing unnecessary expenses.

Training AI models can be resource-intensive. Embracing containerization (e.g., Kubernetes) and serverless computing offers flexibility in managing AI workloads efficiently.


For custom AI development, factors like spot/reserved instances, license cost optimization through Bring Your Own License (BYOL), and cloud parking or power scheduling in development and testing environments can lead to significant cost savings. Additionally, AI services like vision and NLP, covering tasks such as face detection, OCR, landmark identification, object detection, speech-to-text, and text-to-speech, should be optimized and tailored to usage volume for efficient resource utilization.
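Cloud parking, in particular, often comes down to a simple schedule check: keep non-production instances running only during working hours and stop them otherwise. A minimal sketch, with an invented Monday-to-Friday, 07:00-19:00 policy:

```python
from datetime import datetime

# Invented parking policy: dev/test instances run 07:00-19:00, Monday to Friday.
WORK_START_HOUR, WORK_END_HOUR = 7, 19

def should_be_running(now: datetime) -> bool:
    """True if parked (non-production) instances should be powered on right now."""
    is_weekday = now.weekday() < 5  # Monday=0 .. Friday=4
    return is_weekday and WORK_START_HOUR <= now.hour < WORK_END_HOUR

now = datetime.now()
action = "start" if should_be_running(now) else "stop"
print(f"Scheduler decision at {now:%Y-%m-%d %H:%M}: {action} dev/test instances")
```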

Cost management tools and best practices

Organizations should adopt cost management and monitoring tools that help manage costs across multiple cloud platforms.

FinOps tools can provide real-time visibility into spending. This allows organizations to monitor and control costs more efficiently and helps teams understand the financial impact of their cloud activities, so they can make informed decisions to optimize resource usage.

Continuous monitoring of spending levels is essential to identify potential cost overruns or unexpected expenses. By setting alerts, organizations can define thresholds and receive notifications when spending approaches or exceeds predefined limits, enabling them to control costs before they escalate. A cloud cost reporting dashboard provides a centralized view of cost-related metrics and trends. It consolidates data from various cloud services, presents it in a user-friendly interface, and allows stakeholders to analyze spending patterns, identify cost drivers, and make informed decisions about resource allocation and optimization.
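A threshold alert of this kind can be as simple as comparing month-to-date spend against a budget and emitting a notification for each level crossed. The budget figures and alert levels below are invented for illustration:

```python
# Invented example: notify at 50%, 80%, and 100% of the monthly budget.
ALERT_LEVELS = (0.50, 0.80, 1.00)

def spending_alerts(month_to_date: float, budget: float):
    """Yield one message per alert level the current spend has crossed."""
    ratio = month_to_date / budget
    for level in ALERT_LEVELS:
        if ratio >= level:
            yield f"Spend is at {ratio:.0%} of budget (crossed the {level:.0%} threshold)"

for message in spending_alerts(month_to_date=8400.0, budget=10_000.0):
    print(message)  # fires for the 50% and 80% thresholds
```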

Another option is modelling different scenarios to assess the potential impact on costs. This analysis can aid in demand forecasting, enabling better preparedness for changing business requirements.

Subscription optimization

This involves managing the number of data analytics and AI landing zones across different environments, including development, testing, route to live (RTL), and live production, to ensure resources are provisioned based on actual demand. Organizations should align subscription levels with the specific needs of each environment to achieve cost-efficient resource utilization.

Training the IT workforce

Effective cost management requires collaboration between development and operations teams. Conducting cost management training for these teams builds awareness of cost implications and instils best practices for cost optimization. Creating a cost-conscious IT community fosters a culture of financial responsibility and ensures that cost considerations are integral to decision-making processes.

Cloud service provider contract optimization

As organizations increase their consumption of AI services, renegotiating enterprise agreements with cloud service providers becomes essential. Changes in consumption patterns may lead to shifts in cost structures, and renegotiating contracts allows organizations to align agreements with their evolving needs, potentially securing more favorable terms.

At a time of exponential AI adoption, finding the perfect balance between technological innovation and cost efficiency is no mean feat; however, with the right strategies in place, organizations can successfully navigate the evolving landscape of cloud economics.

We've listed the best cloud cost management services.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Abhi Chatterjee, Head of Cloud Consulting and Engineering Services, EY UK.

