GenAI as (your very own) PhD Assistant
Overview
The landscape of academic research is rapidly evolving. With increasing competition and standards for methodological rigor that seem to climb higher every year, Generative AI presents a critical opportunity: it is not just about keeping up, but about surviving the PhD process with your enthusiasm intact.
In this workshop we’ll focus on the use of Generative AI in research, specifically for the arduous and often tumultuous PhD process. We will explore how these tools can serve as an agentic research assistant: something participants can discuss ideas with, use to challenge assumptions, and iterate with across complex, multi-step tasks. From initial ideation to literature review and the creative process of turning data into insight, we’ll cover workflows and tools that do more than lighten the PhD workload: they help participants strengthen the parts of the research process where human judgment matters most. The key idea is that AI should not be framed as a text generator, but as a structured conversational partner operating in an Agentic Loop of propose, critique, verify, and revise. Crucially, we prioritize responsible use, ensuring that while your assistant is artificial, your integrity remains 100% intact.
Learning Outcomes
By the end of this workshop, participants will be able to:
- Match a concrete PhD task to an appropriate AI or non-AI tool;
- Use an agentic assistant as a discussion partner to explore alternatives, surface assumptions, and refine ideas;
- Run a source-grounded workflow for literature discovery and verification using their own research topic;
- Use AI as a critic to stress-test a research question, methodology, or argument without outsourcing judgment;
- Apply a responsible-use checklist covering privacy, verification, and authorship;
- Leave with a small, practical toolkit they can use immediately in their own PhD workflow.
Part 1: From Text Generator to Agentic Research Partner
Open by positioning GenAI as a research assistant, not an author, supervisor, or methodological authority. The key message is that these tools are most useful when they reduce friction in high-effort tasks such as framing questions, exploring literature, critiquing arguments, and improving clarity, but they do not replace judgment, reading, or accountability.
Make the contrast explicit:
- A weak use of AI is asking it to produce text and accepting the result too quickly.
- A stronger, more agentic use is treating it as a partner you can question, correct, redirect, and use to test your own thinking.
- The value comes from the back-and-forth, not from the first answer.
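The back-and-forth described above can be sketched as a minimal control flow. This is an illustrative skeleton only: `ask_model` is a hypothetical placeholder for whatever chat-model API a participant actually uses, not a real library function.

```python
# Minimal sketch of the Agentic Loop: propose, critique, verify, revise.
# `ask_model` is a hypothetical stand-in for any chat-model API call.

def ask_model(role: str, text: str) -> str:
    """Placeholder model call; a real workflow would call an LLM API here."""
    return f"[{role}] response to: {text}"

def agentic_loop(draft: str, rounds: int = 2) -> list[str]:
    """Iterate critique -> verify -> revise over a draft, keeping a transcript."""
    transcript = []
    for _ in range(rounds):
        critique = ask_model("critic", draft)       # model challenges the draft
        transcript.append(critique)
        verified = ask_model("verifier", critique)  # researcher checks the claims
        transcript.append(verified)
        draft = ask_model("reviser", draft + " | " + verified)  # revise with feedback
        transcript.append(draft)
    return transcript

transcript = agentic_loop("Initial research question", rounds=1)
```

The point of the sketch is the shape of the interaction: the first answer is never the last, and a verification step sits between critique and revision.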
Instead of opening with a long taxonomy of models, make a simpler distinction between three roles:
- Conversational agents for brainstorming, explanation, first-pass critique, and idea development.
- Source-grounded tools for literature discovery, synthesis, and citation checking.
- Reference and note-management tools that help participants keep control of their evidence base over time.
Close this part with a responsible-use checklist:
- Always verify citations and read the underlying papers.
- A real paper does not guarantee a correct summary or interpretation.
- Check institutional rules before uploading unpublished or sensitive material.
- Never upload identifiable participant data.
- Treat AI output as a draft for inspection, not as knowledge you can reuse automatically.
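The first checklist rule can be made concrete as a trivial filter: compare AI-suggested references against the set you have actually confirmed. The titles below are placeholder data; in practice the verified set would come from a reference manager such as Zotero, not a hard-coded list.

```python
# Sketch: flag AI-suggested citations that are absent from a verified reference set.
# All titles here are illustrative placeholders, not real bibliography tooling.

def unverified_citations(suggested: list[str], verified: set[str]) -> list[str]:
    """Return suggestions that still need manual checking against the literature."""
    return [title for title in suggested if title.lower() not in verified]

# Verified set: titles you have confirmed and read (lower-cased for matching).
verified = {"design science research in information systems"}

suggested = [
    "Design Science Research in Information Systems",
    "A Plausible-Sounding But Possibly Hallucinated Paper",
]

to_check = unverified_citations(suggested, verified)
```

Even a check this crude enforces the habit the checklist asks for: nothing the assistant cites enters the evidence base until it has been located and read.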
Part 2: Four High-Value Agentic Workflows for PhD Work
Frame this section around four recurring PhD bottlenecks. This is more concrete than speaking abstractly about “agents”, and it gives participants a clearer sense of immediate usefulness. The common thread across the four workflows is that the participant is not simply asking for output; they are engaging an assistant in an iterative dialogue.
- Workflow 1 — Research question framing and scoping:
- Use a general-purpose model with a stable project context to generate alternative formulations of the research problem, possible boundaries, assumptions, and trade-offs.
- Ask it to respond like a critical colleague: What is vague here? What is too broad? What is implied but not stated? Which alternative formulations would make the study more rigorous or feasible?
- The goal is not to let the model choose the question, but to help the researcher see sharper options and hidden assumptions through dialogue.
- Expected output: a narrower research question, clearer scope, or a short list of candidate angles.
- Workflow 2 — Literature discovery, triangulation, and citation checking:
- Use tools such as Consensus, Elicit, ResearchRabbit, and scite to find relevant papers, compare claims, trace citation networks, and surface contradictions or missing perspectives.
- Treat the assistant as a scout and discussion partner: ask what perspectives seem underrepresented, which claims need stronger evidence, and which adjacent literatures might be missing.
- Use a reference manager such as Zotero to keep the resulting corpus organized and reusable.
- Expected output: a starter corpus, a map of supporting vs. conflicting positions, and a shortlist of papers that actually need close reading.
- Workflow 3 — Methodology critique and reviewer-style stress testing:
- Apply the “rubber ducking” logic from the workshop decks: have participants explain their method step by step, then ask the model to identify unstated assumptions, missing controls, validity threats, unclear logic, or weak baselines.
- Follow this with adversarial prompts such as “What could make this study fail?” or “Which reviewer objections are most plausible here?”
- This is one of the clearest examples of the agentic value: the model becomes a skeptical interlocutor that participants can bounce ideas off before they commit to a design.
- Expected output: a stronger methods paragraph, a risk checklist, or a list of questions to discuss with a supervisor.
- Workflow 4 — Writing support and rebuttal preparation:
- Use a general-purpose model or a source-grounded notebook workflow to improve structure, clarity, transitions, and argument flow without delegating factual claims.
- Ask the assistant to play different conversational roles: a skeptical reviewer, a confused reader, or a constructive peer who points out jumps in logic and unclear contributions.
- This is also a good place to simulate reviewer questions, identify overclaims, and prepare short rebuttal notes before submission.
- Expected output: a clearer abstract, introduction, or argument structure, plus a list of likely objections.
Part 3: Curated Toolset by Task
Keep this section short and practical. The point is not to impress participants with how many tools exist; the point is to help them leave with a small toolkit they can actually remember and use.
Suggested curated stack:
- For project context, brainstorming, and iterative drafting: ChatGPT Projects or an equivalent persistent-chat workflow.
- For source-grounded reading across a small corpus: NotebookLM.
- For literature discovery and research overviews: Consensus and Elicit.
- For citation context, support, and contradiction checks: scite.
- For citation-network exploration and finding adjacent papers: ResearchRabbit.
- For managing the evidence base over time: Zotero.
- For stage-aware support across the doctoral lifecycle: the `phd-assistant` skill, which operationalizes the workshop logic through modes such as Critical colleague, Literature scout, Skeptical reviewer, Project orchestrator, and Writing editor.
Selection rule:
- Use general-purpose models when the task is exploratory, reflective, or editorial, especially when participants need a partner to think with.
- Use specialized research tools when the task depends on sources, citations, or coverage of the literature.
- Use note/reference management tools to keep evidence, quotes, and PDFs under your control rather than scattered across chats.
- Use the `phd-assistant` skill when the challenge is not just “write this” but “help me at the right PhD stage, with the right artifact, and the right decision in view.”
Part 4: Hands-On Exercise with a Real PhD Artifact
The exercise should be tied to one concrete PhD artifact so participants can see a direct path from tool to task.
Suggested exercise flow:
- Ask each participant to choose one artifact: a research question, a literature claim, a methods paragraph, or an abstract.
- Step 1: identify the real task — refining scope, checking evidence, stress-testing methodology, or improving clarity — and name the PhD stage, the current artifact, and the immediate decision or bottleneck.
- Step 2: start with a dialogue prompt aimed at critique rather than generation; ask the assistant to first respond with questions, concerns, alternative framings, or missing assumptions.
- Step 3: ask a follow-up question that pushes the dialogue forward: which option is most defensible, what evidence is missing, or what would a skeptical reviewer challenge first?
- Step 4: verify at least one claim, citation, or related paper with a source-grounded tool or by reading the original paper.
- Step 5: run a final interaction in which the assistant challenges or reframes the participant’s provisional revision.
- Step 6: revise the artifact once, keeping only the changes the participant judges to be valid.
- Step 7: write down one next action and one boundary they will keep in their own workflow.
This makes the exercise much more transferable than a generic “polish this text” activity, because it teaches a reusable decision pattern: task first, iterative dialogue second, verification always.
Optional facilitation move:
- Structure the exercise through the `phd-assistant` skill by asking participants to specify their stage, artifact, and bottleneck before they run any prompt.
Part 5: Wrap-up & Take-Home Toolkit
Close by reinforcing a few practical conclusions:
- AI is most useful when attached to a specific research task, not used as a generic oracle.
- Verification is non-negotiable, especially for literature claims and citations.
- Specialized research tools are usually better than general chatbots for literature work.
- Good PhD use of AI still depends on deep reading, methodological rigor, and clear authorship boundaries.
End with a simple take-home toolkit:
- A one-page matrix mapping common PhD tasks to the most appropriate tool category.
- A short verification checklist.
- Two or three prompt patterns for critique, questioning, and idea-bouncing, not just generation.
- The `phd-assistant` skill as a reusable scaffold for applying the workshop logic after the session.
- A reminder that the researcher remains responsible for the final claim, method, and interpretation.
Prompt patterns for Agentic Loop-style use:
Prompt Pattern 1 — Critical Colleague
Use this when refining a research question, contribution, or early framing.
I want you to act as a critical but constructive research colleague.
Context:
- Topic: [insert topic]
- Current research question or idea: [insert text]
- Constraints: [discipline, methods, data access, timeframe, supervisor expectations]
Your task:
1. Identify what is vague, too broad, under-justified, or implicitly assumed.
2. Ask me 5 sharp questions that would help improve the idea.
3. Suggest 3 stronger alternative formulations of the research question or contribution.
4. Point out the main trade-offs between these alternatives.
5. Do not write the final answer for me. Help me think.
Be direct, skeptical, and concise. If something is weak, say so clearly.
Expected use:
- Best for early-stage thinking and idea-bouncing.
- Good first step before discussing the topic with a supervisor.
Prompt Pattern 2 — Skeptical Reviewer
Use this when stress-testing a methods section, argument, abstract, or claimed contribution.
I want you to act as a skeptical but fair peer reviewer.
Context:
- Paper/study paragraph: [paste text]
- Claimed contribution: [insert text]
- Intended audience or venue: [insert venue, field, or type of paper]
Your task:
1. Identify the strongest likely criticisms a reviewer could raise.
2. Point out unclear logic, unsupported claims, missing baselines, validity threats, or overclaiming.
3. Tell me what evidence or clarification would be needed to defend this text.
4. Rank the issues by severity: critical, important, minor.
5. End with 3 specific questions I should answer before I move forward.
Do not be polite for the sake of politeness. Be rigorous and specific.
Expected use:
- Best for methodology critique and pre-submission stress testing.
- Useful when participants need to bounce ideas off a demanding interlocutor before committing to a design.
Prompt Pattern 3 — Literature Scout
Use this when exploring a topic, identifying adjacent literatures, or planning a search strategy before deep reading.
I want you to act as a literature scout and research mapping assistant.
Context:
- Topic or question: [insert topic]
- Field or disciplinary lens: [insert field]
- What I already know: [insert known concepts, authors, or papers]
Your task:
1. Suggest the main subtopics or conversations I should examine.
2. Identify adjacent literatures or alternative framings I may be missing.
3. Propose a search strategy: keywords, keyword combinations, and filters.
4. Suggest what kinds of evidence or disagreement I should look for.
5. If you mention papers, authors, or journals, clearly separate:
- items you are confident about
- items that should be treated as tentative and verified independently
Do not pretend to know the literature if you are uncertain. Help me plan the search and inspection process.
Expected use:
- Best used together with source-grounded tools such as Consensus, Elicit, ResearchRabbit, scite, or a reference manager such as Zotero.
- Strong for widening the search space before narrowing it through verification and close reading.
How to Use the `phd-assistant` Skill Effectively
This skill is most useful when participants use it to structure the conversation, not when they treat it as a shortcut to polished text.
Suggested instructions:
- Start every interaction by naming three things: current PhD stage, current artifact, and immediate decision or bottleneck.
- Add only the constraints that matter for the task: discipline, thesis format, methods orientation, data access, ethics limits, deadline, or supervisor expectations.
- Pick the support mode explicitly: Critical colleague, Literature scout, Skeptical reviewer, Project orchestrator, or Writing editor.
- Ask for diagnosis, questions, options, or a next-step plan before asking for rewriting.
- Use the skill to prepare for supervision as well as writing: for example, ask what should be clarified before a supervisor meeting, what risks need discussion, or what decisions are still open.
- Treat any claims about papers, methods, or disciplinary conventions as verification-sensitive.
- End by asking for the smallest useful next action, and what should be taken back to the supervisor rather than decided by the assistant alone.
Example prompts:
Example 1 — Narrowing a research question
Use the `phd-assistant` skill.
Current PhD stage: early problem framing / proposal preparation
Current artifact: draft research question and a one-paragraph problem statement
Immediate bottleneck: my topic feels too broad and I am not sure what the real unit of analysis should be
Relevant constraints:
- Discipline/domain: information systems
- Thesis format: paper-based thesis
- Methods orientation: design science + case study
- Data/field access: two potential industry partners, but access is still uncertain
- Deadline: proposal draft due in 3 weeks
Support mode: Critical colleague
Please:
1. Diagnose the main weaknesses in the current framing.
2. Ask 5 sharp questions that would help narrow the problem.
3. Suggest 3 more defensible versions of the research question.
4. Explain the trade-offs between them.
5. End with the smallest useful next action and what I should clarify with my supervisor.
Example 2 — Planning a literature search
Use the `phd-assistant` skill.
Current PhD stage: early literature review
Current artifact: a rough reading list and tentative review structure
Immediate bottleneck: I do not know what to read next and I am worried I am missing adjacent conversations
Relevant constraints:
- Discipline/domain: data spaces / digital platforms
- Thesis format: paper-based thesis
- Methods orientation: conceptual + empirical
- Access constraints: some databases available through the university, but not all
Support mode: Literature scout
Please:
1. Map the main subtopics or research conversations I should inspect.
2. Identify adjacent literatures or alternative framings I may be missing.
3. Propose a search strategy with keywords and combinations.
4. Tell me what kinds of disagreement or evidence I should look for.
5. If you mention papers or authors, label anything uncertain as tentative.
6. End with the next search action and what I must verify manually.
Example 3 — Stress-testing a methods section
Use the `phd-assistant` skill.
Current PhD stage: proposal design / methods definition
Current artifact: draft methods section
Immediate bottleneck: I am not sure whether this design is defensible enough for my next supervisor meeting
Relevant constraints:
- Methods orientation: qualitative interviews + document analysis
- Ethics/confidentiality: organizational confidentiality limits what I can share
- Deadline: ethics submission next month
Support mode: Skeptical reviewer
Please:
1. Identify the strongest likely criticisms of this methods section.
2. Point out validity threats, unclear logic, or missing design choices.
3. Rank the issues by severity.
4. Tell me which questions a tough supervisor or reviewer would probably ask first.
5. End with the smallest useful next action and what I should take to my supervisor rather than decide alone.
Example 4 — Getting unstuck mid-PhD
Use the `phd-assistant` skill.
Current PhD stage: mid-PhD, between studies and writing
Current artifact: scattered task list, chapter notes, and paper backlog
Immediate bottleneck: I feel lost and I cannot see what the next sensible milestone should be
Relevant constraints:
- Thesis format: paper-based thesis
- Current status: two papers drafted, one study delayed by data access
- Timeline: target defense in 12 months
- Supervisor expectation: bring a realistic plan to the next meeting
Support mode: Project orchestrator
Please:
1. Reconstruct the likely current stage and the missing artifacts.
2. Identify the main decision bottlenecks and risks.
3. Propose a short milestone plan for the next 4 to 6 weeks.
4. Separate what I should do alone from what I should discuss with my supervisor.
5. End with the smallest useful next action I can take this week.