Can project managers orchestrate AI? Drawing parallels with LLM Development

Language, once the sole domain of human thought, now flows from silicon minds. But these minds need direction: a project manager's vision. This article explores how to conduct this symphony of code and creation.
Individual employee traits: Understanding LLM parameters for project alignment
For project managers tasked with integrating Large Language Models (LLMs) into their workflows, understanding the nuances of these models' parameters is crucial. Think of these parameters as akin to the inherent traits and behavioural controls of a human employee.
Temperature (creativity vs. reliability): Project managers must balance creativity and accuracy. A higher temperature might be beneficial for brainstorming sessions or generating diverse content, similar to an employee with a highly creative, “out-of-the-box” mindset. However, it increases the risk of unreliable outputs. Lower temperatures ensure consistent, fact-based responses, vital for tasks requiring precision, such as report generation or data analysis, much like a highly detail-oriented, reliable employee. Project managers should consider the risk tolerance of the project and its stakeholders when adjusting this parameter, just as they would when assigning tasks based on an employee’s strengths.
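As a rough, vendor-neutral illustration of what the temperature dial does, the sketch below scales a model's raw scores (logits) before converting them into probabilities. The logit values are made up for demonstration; real models have vocabulary-sized logit vectors.

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw model scores (logits) into a probability distribution.
    Lower temperature sharpens the distribution (reliable, repetitive);
    higher temperature flattens it (creative, riskier)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for four candidate next tokens
logits = [2.0, 1.0, 0.5, 0.1]
cool = softmax_with_temperature(logits, temperature=0.2)  # near-deterministic
warm = softmax_with_temperature(logits, temperature=2.0)  # more exploratory
```

At a low temperature nearly all probability mass lands on the top-scoring token (the detail-oriented employee), while a high temperature spreads mass across weaker candidates (the brainstormer).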
Top-k and top-p sampling (focus and relevance): These parameters are essential for maintaining coherence and relevance. They help ensure the LLM stays focused on the project’s objectives, reducing the likelihood of tangential or irrelevant outputs. Consider this like an employee who can stay on task and filter out distractions. For PMPs, this translates to improved efficiency and reduced rework, as the LLM’s outputs are more likely to align with project requirements.
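A minimal sketch of both filters, assuming a toy five-token probability distribution (the numbers are illustrative, not from any real model): top-k keeps a fixed number of candidates, while top-p (nucleus sampling) keeps the smallest set whose cumulative probability reaches the threshold.

```python
def top_k_filter(probs, k):
    """Keep only the k most probable tokens and renormalise;
    everything else is zeroed out."""
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    total = sum(probs[i] for i in top)
    return [probs[i] / total if i in top else 0.0 for i in range(len(probs))]

def top_p_filter(probs, p):
    """Keep the smallest set of tokens whose cumulative probability
    reaches p (nucleus sampling), then renormalise."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= p:
            break
    total = sum(probs[i] for i in kept)
    return [probs[i] / total if i in kept else 0.0 for i in range(len(probs))]

# Hypothetical next-token distribution
probs = [0.5, 0.3, 0.1, 0.05, 0.05]
tk = top_k_filter(probs, 2)     # only the two strongest candidates survive
tp = top_p_filter(probs, 0.75)  # candidates covering 75% of the mass survive
```

Either way, low-probability "tangents" are pruned before sampling, which is what keeps outputs on task.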
Frequency and presence penalties (avoiding repetition): Preventing repetitive outputs is critical for maintaining the quality of generated content. This is analogous to an employee who avoids repeating themselves and brings new insight to the table. PMPs can leverage these parameters to ensure that LLM-generated reports, summaries and analyses are fresh and engaging.
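A simplified sketch of how these penalties typically work: logits of already-generated tokens are reduced, with the frequency penalty scaling by repeat count and the presence penalty applied once. The penalty values and logits below are made-up examples.

```python
def apply_penalties(logits, generated_counts,
                    freq_penalty=0.5, presence_penalty=0.3):
    """Discourage repetition by lowering the scores of tokens that
    have already appeared: the frequency penalty grows with each
    repeat, the presence penalty is a flat one-off deduction."""
    adjusted = list(logits)
    for token_id, count in generated_counts.items():
        adjusted[token_id] -= freq_penalty * count  # per-occurrence
        if count > 0:
            adjusted[token_id] -= presence_penalty  # one-off
    return adjusted

# Token 0 has already been generated three times
logits = [2.0, 1.5, 1.0]
adjusted = apply_penalties(logits, {0: 3})
```

After the penalties, the previously dominant token 0 drops below token 1, so the next draw favours fresh wording.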
Maximum length and stop sequences (defining boundaries): These parameters provide essential control over output length and termination. This is comparable to setting clear deadlines and expectations for an employee’s deliverables. For PMPs, this is vital for managing project timelines and ensuring that LLM-generated deliverables meet specific length requirements. They are also important for the automation of report generation, where reports must be concise.
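The generation loop below sketches both controls with a dummy stand-in for a model (it just replays a fixed script); the token budget is the deadline, the stop sequence the "definition of done".

```python
def generate(next_token_fn, max_tokens=10, stop_sequences=("END",)):
    """Emit tokens until a stop sequence appears or the length budget
    runs out - deadlines and scope limits for the model."""
    out = []
    for _ in range(max_tokens):
        out.append(next_token_fn(out))
        text = " ".join(out)
        for stop in stop_sequences:
            if stop in text:
                # truncate at the stop marker, drop trailing whitespace
                return text[: text.index(stop)].rstrip()
    return " ".join(out)

# Dummy "model" emitting a fixed script, for illustration only
script = iter(["Status", "report", "complete.", "END", "extra"])
result = generate(lambda prev: next(script))
```

Everything after the stop marker ("extra" in the script) never reaches the deliverable, and `max_tokens` caps the output even if no stop marker ever appears.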
Log probabilities (decision-making insight): The log probabilities parameter provides insights into an LLM’s decision-making process by returning the log probabilities of the top-k token choices at each step, thereby enhancing model transparency and allowing for detailed analysis of token-level confidence. In project management, this functionality parallels established practices: log probabilities offer quantifiable confidence levels, mirroring PMP risk assessments and quality control metrics, and they provide a transparent audit trail of LLM decisions, vital for compliance and stakeholder communication, much like the detailed project documentation PMPs maintain.
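A self-contained sketch of how top-k log probabilities are computed from logits (the logit values are hypothetical): each log probability is a token-level confidence figure, and exponentiating it recovers the plain probability for stakeholder-friendly reporting.

```python
import math

def top_logprobs(logits, k=3):
    """Return the top-k (token_id, log_probability) pairs - a
    token-level 'confidence report' from the model."""
    m = max(logits)
    # log of the softmax normaliser, computed stably
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    logprobs = [l - log_z for l in logits]
    ranked = sorted(enumerate(logprobs), key=lambda x: x[1], reverse=True)
    return ranked[:k]

# Hypothetical logits for four candidate tokens
top = top_logprobs([3.0, 1.0, 0.2, -1.0], k=4)
for token_id, lp in top:
    print(token_id, round(lp, 3), "prob:", round(math.exp(lp), 3))
```

Log probabilities are always negative (probabilities below 1), and values close to zero signal high confidence - a natural input for the quantitative risk registers PMPs already keep.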
From configuration to creation: The structured development of LLMs (organisational analogies)
Having explored the individual "traits" of LLM parameters, let's now consider how these "individuals" come together to tackle a project. Just as a project is broken down into distinct phases, LLM development follows a series of critical stages:
Data preparation: Defining the project scope (hiring and resource allocation): This initial stage aligns with the project initiation phase, where project scope and objectives are defined. For PMPs, this translates to identifying the specific data requirements for the LLM, ensuring data accuracy, and establishing clear project goals, much like defining the roles and resources needed for a new project team.
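A minimal, hypothetical sketch of this stage (real pipelines add tokenisation, quality scoring and near-duplicate detection): normalise the raw text, drop records too short to carry training signal, and remove exact duplicates.

```python
def prepare_dataset(raw_records, min_length=20):
    """Toy data-preparation step: normalise whitespace, filter out
    very short records, and deduplicate - the scoping and resourcing
    work done before any training begins."""
    seen, cleaned = set(), []
    for record in raw_records:
        text = " ".join(record.split())  # collapse stray whitespace
        if len(text) < min_length:
            continue  # too short to be useful training signal
        if text in seen:
            continue  # exact duplicate of an earlier record
        seen.add(text)
        cleaned.append(text)
    return cleaned

raw = [
    "  The project charter defines scope.  ",
    "The project charter defines scope.",  # duplicate after cleaning
    "ok",                                  # too short
]
cleaned = prepare_dataset(raw)
```

Just as over- or under-resourcing a team shows up later in execution, skimping on this stage surfaces as quality problems during training.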
Pre-training: Building the project foundation (company-wide training): This stage mirrors the project planning phase, where the project team lays the groundwork for subsequent development. Here, the LLM learns the foundational linguistic structures, akin to a new hire receiving company-wide training and learning the basics of the firm’s operations.
Fine-tuning: Executing and controlling the project (specific skill training): This stage aligns with the project execution and control phases, where the project team refines the LLM for specific tasks. PMPs can apply their expertise in resource management and quality control to ensure that the LLM meets project requirements, much like training an employee on the specific skills needed for a task.
Reinforcement learning from human feedback (RLHF): Monitoring and evaluation (performance reviews): This stage corresponds to the project monitoring and evaluation phase, where the project team assesses the project’s performance and makes necessary adjustments. For PMPs, this involves establishing feedback loops, tracking performance metrics and ensuring that the LLM aligns with stakeholder expectations, similar to performance reviews for employees.
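Real RLHF trains a reward model and optimises the LLM's policy against it; the toy loop below only captures the core idea, using made-up candidate responses and a stand-in feedback function. Responses that reviewers approve accumulate reward, and the "aligned" policy then prefers them - the performance-review cycle in miniature.

```python
def rlhf_toy_loop(candidates, human_feedback, rounds=3):
    """Toy preference loop: each round, every candidate response is
    scored by a human feedback function (+1 approve, -1 reject);
    the policy then prefers the highest cumulative reward."""
    scores = {c: 0.0 for c in candidates}
    for _ in range(rounds):
        for c in candidates:
            scores[c] += human_feedback(c)
    return max(scores, key=scores.get)

# Hypothetical reviewer who rewards courteous phrasing
candidates = [
    "Send the file.",
    "Thanks for waiting, here is the file.",
]
feedback = lambda c: 1 if "thanks" in c.lower() else -1
best = rlhf_toy_loop(candidates, feedback)
```

Repeated rounds of feedback steadily widen the score gap, just as consistent review cycles compound into lasting behaviour change in a team.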
Business insights: Parallels between agile and LLM development
The Agile Incremental Model and the process of building a Large Language Model (LLM) both emphasise iterative development, continuous improvement and user feedback. In Agile, the first increment involves requirement analysis and planning, where user needs are identified, and development strategies are outlined. This aligns with the data preparation stage in LLM development, where massive datasets are collected, cleaned and organised to ensure a solid foundation for training. The second increment in Agile focuses on initial development and prototyping, similar to the pre-training phase of LLMs, where the model learns linguistic structures and patterns through unsupervised learning.
As Agile progresses, subsequent increments enhance the software with additional features and refinements based on stakeholder feedback. This mirrors the fine-tuning stage in LLM development, where the model undergoes further training on smaller, curated datasets to improve its performance for specific tasks or domains. Finally, Agile includes regular testing and feedback loops, much like the Reinforcement Learning from Human Feedback (RLHF) stage in LLMs, where human reviewers assess responses and provide corrections to align the model's output with user expectations. In Agile, each increment provides tangible rewards, such as working features or user satisfaction, which drive further development. Similarly, RLHF rewards the LLM by reinforcing desired behaviours through human feedback, gradually refining its responses.
Conclusion
Imagine a team that learns, adapts and generates with near-human fluency. This is the promise of LLMs, a promise that demands a project manager’s touch.