Seven Jobs, One Overloaded “QA”
Most teams talk about “QA” as if it were one person, one role, one set of tasks. CTFL v4.0 Chapter 4 quietly destroys that illusion by breaking testing into a chain of activities: planning, monitoring and control, analysis, design, implementation, execution, and completion, each with its own responsibilities, inputs, and outputs.
If you currently expect a single tester to “own QA” for a product or project, there is a high chance you are violating this structure, under‑documenting your testware, and leaving critical risks unowned.
What CTFL v4.0 Actually Says: Activities and Testware
Chapter 4 (and the related sections in Chapter 1) defines testing as a process made up of clearly named activities.
The core activities
- Test planning: Defining the test approach, objectives, scope, resources, schedule, entry/exit criteria, and risk priorities.
- Test monitoring and control: Tracking actual progress versus the plan, analysing deviations, and adjusting scope, priorities, or resources accordingly.
- Test analysis: Studying the test basis (requirements, user stories, architecture) to identify testable features and derive test conditions, answering “what to test?”.
- Test design: Turning conditions into test cases, selecting techniques (e.g., equivalence partitioning, boundary values, decision tables, state transitions, exploratory approaches), and defining expected results.
- Test implementation: Preparing test data, organising test suites, setting up environments, and making test cases executable (manual or automated).
- Test execution: Running tests, logging results, reporting incidents/defects, and retesting and regression testing as needed.
- Test completion: Consolidating results, evaluating coverage and residual risk, archiving testware, and feeding lessons learned back into the organisation.
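Two of the design techniques named above, equivalence partitioning and boundary value analysis, can be sketched in a few lines of Python. The age range and the `is_eligible` function below are illustrative assumptions, not examples from the syllabus:

```python
# Toy system under test: an age field valid from 18 to 65 inclusive.
def is_eligible(age: int) -> bool:
    return 18 <= age <= 65

# Equivalence partitioning: one representative per partition.
partitions = {"below range": 10, "in range": 40, "above range": 80}

# Boundary value analysis: each edge of the valid range plus its neighbours.
boundaries = [17, 18, 19, 64, 65, 66]

# Derived test cases as (input, expected result) pairs.
cases = [(age, 18 <= age <= 65) for age in list(partitions.values()) + boundaries]

for age, expected in cases:
    assert is_eligible(age) == expected, f"unexpected result for age={age}"
print(f"{len(cases)} derived cases passed")
```

Nine cases cover three partitions and six boundaries; the same derivation would otherwise take dozens of arbitrary inputs to approximate.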
Testware: the tangible outputs
CTFL v4.0 uses the term testware for all artefacts produced by these activities: test plans, test conditions, test cases, test procedures, test data, test scripts, and defect reports. Well‑run teams treat testware as first‑class assets, not disposable documents: testware is how the organisation remembers how it tested, why, and with what level of rigour.
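Treating testware as a first‑class asset can start with something as simple as a catalogue. The sketch below is a hypothetical structure (the field names and inventory entries are assumptions, not syllabus content) that makes missing artefact kinds visible:

```python
from dataclasses import dataclass

@dataclass
class TestwareItem:
    name: str          # e.g. "release test plan"
    kind: str          # plan, condition, case, procedure, data, script, defect report
    produced_by: str   # the CTFL activity that produces it
    location: str      # where the artefact actually lives

# Hypothetical inventory for one project.
inventory = [
    TestwareItem("release test plan", "plan", "test planning", "wiki/qa/plan"),
    TestwareItem("login test conditions", "condition", "test analysis", "repo/tests/conditions.md"),
    TestwareItem("regression suite", "script", "test implementation", "repo/tests/regression/"),
]

# Quick audit: which testware kinds does this project not produce at all?
expected_kinds = {"plan", "condition", "case", "procedure", "data", "script", "defect report"}
missing = expected_kinds - {item.kind for item in inventory}
print("missing testware kinds:", sorted(missing))
```

Even this minimal audit surfaces the gaps Chapter 4 warns about: here, no test cases, procedures, data strategy, or defect reports are catalogued anywhere.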
Test Management vs Testing: Not the Same Job
The syllabus explicitly distinguishes two principal roles in testing.
- Test management role: Owns the process, team, and leadership of test activities, with a focus on planning, monitoring and control, and completion. This is where decisions about risk, scope, resourcing, and reporting live.
- Testing role (testers): Focuses on analysis, design, implementation, and execution. This is where test conditions are derived, cases designed, environments prepared, and actual testing performed.
On real projects, developers, business analysts, and operations staff may participate in some of these activities, but CTFL’s message is clear: if nobody explicitly owns test management and nobody is explicitly supported to do testing work properly, you do not have a testing system; you have heroic individuals trying to plug structural gaps.
This is exactly the gap MJ Academy’s CTFL v4.0 programme is designed to expose and fix: giving testers, QA leads, and engineering managers a shared language for activities, artefacts, and roles, so “QA” becomes a designed function, not an overloaded job description.
Why This Matters for the “Smart Enterprise”
Building a “smart enterprise” is not just about adding AI; it is about making every critical process explicit, governed, and owned, including testing.
AI tools raise the bar; they don’t remove the roles
MeJuvante’s ecosystem is built around AI‑ and no‑code‑driven workplaces, including the noCode AI Coding & Testing Workplace and AI Workplace Creator.
- The noCode AI Coding & Testing Workplace enables full‑stack build, test, and automation through ready apps, allowing teams to design and execute tests without writing code.
- The AI Workplace Creator lets teams define workflows, apps, and environments conversationally, and supports secure app design, testing, and deployment for distributed teams.
These platforms automate and accelerate test implementation and execution, but they do not replace:
- Test planning and risk decisions.
- Test analysis and design based on business context.
- Ownership of completion reporting and lessons learned.
In other words: AI and no‑code tools compress the “how” of execution, but humans must still own the “why,” “what,” and “when” decisions that CTFL v4.0 describes.
How this shows up in MejuHire and AI workspaces
MeJuvante’s flagship AI hiring platform, MejuHire, is a good example.
- Test analysis and design determine which flows to test (resume parsing, JD matching, scoring, ranking, notifications, integrations) and what “correct behaviour” looks like for different markets and roles.
- noCode testing tools and AI workplaces then help implement and execute those tests quickly across different hiring workflows, data sets, and geographies.
In MeJuvante’s broader AI Workplace & IntelliWorks stack, the same pattern applies: structured roles and activities define the testing system; AI and no‑code tools make that system faster, more accessible, and easier to integrate into daily operations.
The smart enterprise doesn’t say “AI will do QA now.” It says: “We will keep the CTFL‑defined structure of test activities and roles, and use AI to amplify the people in those roles.”
How MJ Academy Turns CTFL v4.0 Chapter 4 into Practice
MJ Academy’s CTFL v4.0 programme (including the associated podcast series) is designed to move Chapter 4 from theory to operating model.
In the Chapter 4 session, participants:
- Map the seven test activities to their own projects and identify which ones currently have no clear owner.
- Catalogue their existing testware and see where critical artefacts (e.g., test conditions, data strategies, completion reports) are missing or fragile.
- Analyse how test management and testing roles are currently distributed across QA, developers, and business stakeholders, and where reinforcement is needed.
- Explore how tools like MeJuvante’s noCode AI Testing Workplace can automate implementation and execution without undermining the role of human judgment in planning, analysis, and design.
The outcome is not just exam readiness; it is a concrete plan to re‑shape your testing function to match CTFL v4.0’s structure.
Design a Testing System, Not a Hero Role
If you are serious about “smart enterprise” initiatives, AI adoption, or simply reducing production incidents, you cannot afford to treat QA as a single overloaded role. You need a designed testing system: defined activities, owned roles, and durable testware.
Three practical next steps:
- Map CTFL’s activities to your current reality: Identify who actually does planning, monitoring, analysis, design, implementation, execution, and completion today, and where there are gaps or overloads.
- Make testware a first‑class asset: Standardise how your team creates, stores, and maintains test plans, conditions, cases, data, scripts, and reports, especially as you adopt no‑code and AI‑assisted tools.
- Equip your people with CTFL v4.0 and modern tools: Enrol your QA leads and future quality engineers into MJ Academy’s CTFL v4.0 programme and give them access to platforms like MeJuvante’s noCode AI Testing Workplace so they can implement what they learn in real projects.
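The first step above, mapping activities to owners, can be done in a spreadsheet or in a few lines of code. This is a hedged sketch with a hypothetical project assignment (the owner names and gaps are invented for illustration):

```python
# The seven CTFL v4.0 test activities, in process order.
ACTIVITIES = [
    "planning", "monitoring and control", "analysis",
    "design", "implementation", "execution", "completion",
]

# Hypothetical current state on one project; None marks an unowned activity.
owners = {
    "planning": "QA lead",
    "monitoring and control": None,
    "analysis": "QA lead",
    "design": "QA lead",
    "implementation": "dev team",
    "execution": "dev team",
    "completion": None,
}

# Gaps: activities nobody owns. Overloads: how many activities each person carries.
gaps = [a for a in ACTIVITIES if owners.get(a) is None]
overloads = {o: [a for a, x in owners.items() if x == o]
             for o in set(owners.values()) if o}

print("unowned activities:", gaps)
print("activities per owner:", overloads)
```

In this invented example the audit immediately shows two unowned activities and a QA lead carrying three, which is exactly the overload pattern the article describes.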
➡ DM us “CTFL CHAPTER 4” to receive MJ Academy’s full slide deck on test activities, testware, and test roles: a free resource you can use to audit your current testing setup and start redesigning it for the smart enterprise era.