
Code Curation

  • Glenn Atter
  • Nov 24, 2025
  • 10 min read

1. Introduction

With any software project, there comes a point where the system is no longer actively developed. A phrase I have heard quite a lot is that it has been ‘put into maintenance mode’.

This transition often signifies that the software has reached maturity or is feature complete, but in most cases it indicates that resources are being redirected elsewhere. However, it does not mean that work will no longer be carried out; updates may still be performed, but only in response to a bug or security issue. Essentially, work is carried out in reaction to problems.


Whilst this approach sustains the basic functionality in the short term, it frequently leads to a decline in the software. Third-party dependencies and libraries, which form the backbone of a modern application, evolve rapidly. Frameworks such as Angular typically reach end-of-support around 18 months after a major release, and hosting platforms like Kubernetes support each minor version for an even shorter period, leaving unmaintained codebases vulnerable to obsolescence, security risks and escalating technical debt. Bringing an outdated system back up to date can then cost significant time and resources.


A proactive alternative to traditional maintenance is what has been given the term ‘Code Curation’: it emphasizes strategic oversight, dependency vetting, and continuous refinement to preserve codebase health and longevity. In this article, we'll explore the pitfalls of relying solely on maintenance mode, contrast it with the benefits of curation, and weigh the pros and cons to help you decide the best path for your projects. By shifting from reaction to prevention, curation offers a sustainable strategy in an era of accelerating software evolution.


2. Core Elements of Code Curation

2.1. Dependency Analysis

One of the first steps in moving away from pure maintenance is understanding what dependencies are being used and their associated lifecycles. This is a critical step even if you're not fully transitioning to code curation—any company providing a software system should know at least the following information about the dependencies in use. Gathering this data forms the basis for proactive decision-making, allowing teams to identify risks early and curate a healthier codebase rather than reacting to breakdowns.


For each dependency, track these key details:

  • Name: The name of the third-party library or package.

  • Description: What it is used for within your application.

  • Version: The specific version currently in use.

  • End-of-Support Date: When official support for this version ends, after which no security patches or updates will be provided (e.g., Angular versions typically reach this point about 18 months after release).

  • Last Updated Date: When the library was last updated or patched, indicating if it's actively maintained or potentially abandoned.

  • Known Vulnerabilities: Any reported security issues (e.g., CVEs) that could affect your version, sourced from vulnerability databases.

  • License: The licensing terms (e.g., open-source vs. proprietary), to flag potential compliance or redistribution risks.

  • Impact: The potential severity if this dependency fails—e.g., a date formatter might have low impact if it's isolated, while a core database driver could disrupt the entire product.

  • Risk: An overall assessment based on the above factors (e.g., likelihood × impact), which evolves over time. A recently updated, actively supported library with low impact would score as low risk, whereas an outdated one with known vulnerabilities would be high risk.


  • Review Date: The last time this dependency was reviewed.


By compiling this information—ideally into a centralised Software Bill of Materials (SBOM)—teams can shift from reactive maintenance to strategic curation, prioritising updates, replacements, or removals to mitigate long-term decline.
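As a rough sketch of such an inventory entry, the fields above can be captured in a small Python record with a derived risk score. The field names and scoring rule here are illustrative choices, not a standard; adapt them to your own SBOM tooling:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Dependency:
    """One entry in the dependency inventory / SBOM."""
    name: str
    description: str
    version: str
    end_of_support: Optional[date]  # None if unknown
    last_updated: date
    known_vulnerabilities: int      # e.g. count of open CVEs for this version
    license: str
    impact: int                     # 1 (isolated utility) .. 5 (core component)

    def likelihood(self, today: date) -> int:
        """Rough likelihood of trouble: open vulnerabilities plus staleness."""
        score = min(self.known_vulnerabilities, 3)
        if self.end_of_support is not None and self.end_of_support <= today:
            score += 2  # past end-of-support: no more patches are coming
        if (today - self.last_updated).days > 365:
            score += 1  # not updated in over a year: possibly abandoned
        return max(min(score, 5), 1)

    def risk(self, today: date) -> int:
        """Overall risk as likelihood x impact (1..25)."""
        return self.likelihood(today) * self.impact
```

An up-to-date, low-impact library scores near the bottom of the range, while an out-of-support core driver with open CVEs scores near the top.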


Dependencies are not just third-party libraries; they cover everything involved in providing the product to the end-user. The analysis must therefore also include aspects of the hosting environment, such as the operating system, container runtime, and orchestration platform.


This analysis must be done on a regular basis, as this information changes over time. To reduce the time and effort required, the review frequency can be tied to risk: high-risk items are checked monthly, while low-risk items are checked every six months.
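The review cadence can be derived mechanically from the risk score. A minimal sketch follows; the thresholds are illustrative and should match whatever scoring scale your own analysis uses:

```python
from datetime import date, timedelta

def next_review(risk_score: int, last_review: date) -> date:
    """Schedule the next dependency check based on risk.

    High-risk items are re-checked monthly, low-risk items six-monthly,
    with a quarterly tier in between. Thresholds are illustrative.
    """
    if risk_score >= 15:                         # high risk: check monthly
        return last_review + timedelta(days=30)
    if risk_score >= 8:                          # medium risk: check quarterly
        return last_review + timedelta(days=91)
    return last_review + timedelta(days=182)     # low risk: six-monthly
```

Storing the result against each inventory entry's Review Date makes overdue checks easy to query.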


The insight brought by this analysis is not only useful in this context but important for any development project; knowing when dependencies will need replacing should be part of every project's planning.


Implementation Tips:

An important aspect of the dependency analysis is to ensure that this is a team exercise. Everyone should be involved in understanding the impact that dependencies have on a product.


2.2. Regular Code Analysis

While many systems exist for automated dependency analysis, they are often configured to run only as part of a CI/CD pipeline—and typically only when code changes are detected. This works well for active development but falls short in maintenance mode: If the codebase remains unchanged, emerging issues like new vulnerabilities in dependencies might go unnoticed for months or years.


A cornerstone of code curation is proactivity. To embody this, implement automated code analysis on a regular schedule, regardless of whether code has been modified. This not only feeds into ongoing dependency analysis (e.g., updating end-of-support dates or risk scores) but also surfaces security issues, such as newly disclosed CVEs, before they escalate into breaches. For instance, a dependency that was low-risk during the last CI/CD run could become high-risk if a zero-day exploit is published.


By scheduling these analyses, perhaps weekly or monthly, teams can maintain a dynamic view of the codebase's health, preventing the accumulation of technical debt and aligning with curation's goal of long-term sustainability.


Implementation Tips:


  • Scheduling Mechanisms: Use cron jobs, GitHub Actions workflows, or serverless options like AWS Lambda to trigger scans periodically. For example, set up a GitHub Action to run Snyk scans every Friday, even on dormant repositories.


  • Tools for Regular Analysis: Leverage extensions of the automation tools mentioned earlier:

    • Snyk or OWASP Dependency-Check for vulnerability alerts on fixed intervals.

    • SonarQube's scheduled scans for broader code quality metrics, including dependency health.

    • Emerging AI tools like GitHub Advanced Security (with Copilot integrations) for predictive risk analysis.


  • Best Practices: Configure thresholds to minimize noise (e.g., alert only on high-severity issues), integrate with notification systems like Slack, and review results quarterly to refine your curation strategy.
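The scheduling and threshold tips above can be combined in a single workflow. For example, a GitHub Actions job can audit dependencies every Friday regardless of commit activity; the file name, cron time, and audit level below are illustrative choices to adapt:

```yaml
# .github/workflows/scheduled-scan.yml (illustrative)
name: scheduled-dependency-scan
on:
  schedule:
    - cron: "0 6 * * 5"    # every Friday at 06:00 UTC, even on dormant repos
  workflow_dispatch:        # also allow manual runs
jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      - run: npm ci
      - run: npm audit --audit-level=high   # fail only on high-severity issues
```

Because the trigger is time-based rather than push-based, a repository that has not changed in a year still gets checked against newly disclosed vulnerabilities.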


This proactive rhythm transforms analysis from a reactive checkpoint into a curation habit, ensuring your software remains secure and viable without constant manual intervention.


2.3. Planning Work Based on Dependency Analysis Insights

With a comprehensive dependency analysis in hand, complete with details like end-of-support dates, risk scores, and impact assessments, the next logical step in code curation is to translate those insights into actionable work plans. This proactive planning bridges the gap between awareness and execution, ensuring that dependency-related tasks (e.g., upgrades, replacements, or removals) are scheduled strategically rather than handled reactively during crises. By incorporating analysis data into your planning, you can prioritize high-risk items, allocate resources efficiently, and align updates with your project's lifecycle, ultimately extending the software's viability and reducing the "inevitable decline" associated with pure maintenance mode.


This process draws from established best practices in dependency management, where planning isn't just about fixing issues but about optimizing the codebase for long-term health. For example, if your analysis reveals an Angular dependency nearing its 18-month EOS, planning might involve scheduling an upgrade sprint before vulnerabilities accumulate.


Key Steps for Planning Dependency Work

Leverage your analysis to create a structured plan. Here's a step-by-step approach, informed by industry strategies:


  • Prioritize Based on Risk and Impact: Sort dependencies by their calculated risk scores from your analysis. Focus first on high-risk items—those with known vulnerabilities, impending EOS dates, or broad impact (e.g., core libraries affecting multiple modules). Use a scoring system (e.g., high/medium/low) to triage: Security patches get immediate attention, while low-impact updates can be batched.

  • Develop a Scheduling Roadmap: Establish a regular update cadence tailored to your analysis. For instance:

    • Weekly/Bi-Weekly Reviews: Scan for critical security updates using automated alerts.

    • Monthly/Quarterly Deep Dives: Tackle medium-risk upgrades, grouping compatible changes to minimize disruption.

    • Annual Overhauls: Plan major migrations for high-effort items, like switching from an abandoned library.

    Integrate this into agile workflows, such as dedicating backlog items or sprints to curation tasks. Tools like Jira or Trello can help visualize the roadmap with timelines tied to EOS dates.

  • Incorporate Testing and Rollback Plans: For each planned update, outline testing protocols (e.g., unit tests for compatibility) and contingency measures. Analysis insights can guide this—e.g., if a dependency has a history of breaking changes, allocate extra buffer time. Automate regression testing where possible to ensure upgrades don't introduce new issues.

  • Monitor and Iterate: Post-update, re-run your analysis to verify improvements and adjust the plan. Track metrics like reduced vulnerability counts or faster build times to demonstrate progress, which can reinforce management buy-in.
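The prioritisation step above can be sketched as a simple bucketing function over the analysis output. The field names and thresholds are illustrative and should mirror your own inventory:

```python
def triage(deps: list[dict]) -> dict[str, list[str]]:
    """Bucket dependencies into immediate / batched / deferred work.

    Uses the risk scores from the dependency analysis: known-vulnerable or
    high-risk items need immediate attention, medium-risk items are batched
    into the next deep dive, and the rest are deferred.
    """
    plan = {"immediate": [], "batched": [], "deferred": []}
    for dep in sorted(deps, key=lambda d: d["risk"], reverse=True):
        if dep["risk"] >= 15 or dep.get("vulnerable"):
            plan["immediate"].append(dep["name"])
        elif dep["risk"] >= 8:
            plan["batched"].append(dep["name"])
        else:
            plan["deferred"].append(dep["name"])
    return plan
```

The resulting buckets map directly onto the cadence above: immediate items go into the current sprint, batched items into the next monthly or quarterly review.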


Challenges and Mitigation Strategies

Planning isn't foolproof—challenges like breaking changes or resource constraints can arise. Mitigate by starting with low-risk pilots, involving the team in estimation sessions, and using automation (e.g., Dependabot for auto-PR scheduling) to handle routine updates. If an upgrade proves too complex, add it to a "hard updates" backlog with dedicated time slots, preventing backlog buildup.
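For the routine end of that spectrum, a Dependabot configuration can raise upgrade pull requests automatically on a schedule. A minimal example is shown below; the ecosystem, interval, and PR limit are placeholders to adapt:

```yaml
# .github/dependabot.yml (illustrative)
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"
    open-pull-requests-limit: 5
```

Capping open pull requests keeps the routine updates flowing without swamping the team, leaving the "hard updates" backlog for planned sprint work.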

By embedding dependency analysis into work planning, code curation becomes a deliberate, data-driven process that not only maintains but enhances your software's resilience and adaptability.


3. Moving towards Code Curation

3.1. Planning the Transition

Shifting from a reactive maintenance mode to proactive code curation requires careful planning to ensure a smooth rollout, minimize disruptions, and maximize long-term benefits. This phase involves assessing your current codebase, defining objectives, and creating a roadmap that aligns with your team's capabilities and project needs. Without a solid plan, attempts at curation can fizzle out, leading back to the familiar pitfalls of outdated dependencies and escalating technical debt.

Think of it as designing a blueprint: It sets the foundation for sustainable practices like regular refactoring, dependency vetting, and documentation updates.


Key Steps in Planning

Start by conducting a thorough audit to baseline your codebase, then outline actionable steps. Here's a structured approach based on established best practices:

  • Assess the Current State: Build on your dependency analysis by evaluating overall code health. Use tools like SonarQube for metrics on code smells, duplication, and complexity. Identify pain points, such as legacy sections prone to bugs or high-maintenance dependencies (e.g., those nearing EOS like Angular versions). Involve the team in this via workshops to gather insights on what works and what doesn't.

  • Define Goals and Scope: Set clear, measurable objectives, e.g., reduce vulnerability exposure by 50% in six months or prune unused dependencies quarterly. Prioritize based on risk: Focus first on high-impact areas like security-critical code. Decide on scope (e.g., start with one module or go project-wide) to avoid overwhelming the team.

  • Develop a Roadmap: Create a phased timeline, such as:

    • Phase 1 (1-2 months): Implement automated tools and initial audits.

    • Phase 2 (3-6 months): Integrate curation into workflows, like adding refactoring sprints.

    • Phase 3 (Ongoing): Establish review cycles and metrics tracking.

    Throughout, incorporate best practices like adopting coding standards (e.g., consistent naming and modular design) and emphasizing documentation to make the code curation-ready.

  • Resource Allocation: Identify required skills, tools, and training. For example, allocate time in sprints for curation tasks (e.g., 20% of developer bandwidth) and budget for tools like Snyk or Renovate.

Potential Challenges and Mitigations

Planning isn't without hurdles—resistance to change or underestimating effort can derail progress. Mitigate by starting small (pilot on a single repo), involving cross-functional teams early, and using agile methodologies to iterate on the plan.

Regular check-ins ensure the plan adapts to real-world feedback, turning curation into a habitual practice rather than a one-off initiative.

Effective planning transforms curation from an abstract concept into a tangible strategy, paving the way for reduced maintenance overhead and a more resilient codebase.

3.2. Securing Management Buy-In

Gaining management buy-in is often the make-or-break factor in transitioning to code curation. Executives may view proactive efforts as "nice-to-haves" amid competing priorities, especially if the codebase is "working fine" in maintenance mode. However, by framing curation as a strategic investment that delivers measurable ROI, such as cost savings, risk reduction, and improved efficiency, you can align it with business goals like faster time-to-market and compliance.

Drawing parallels from maintenance management in other fields (e.g., CMMS systems), the key is to demonstrate value through data and storytelling.

Strategies for Building Support

Approach this like pitching a business case: Focus on quantifiable benefits while addressing concerns like upfront costs.

  • Highlight ROI and Cost Savings: Present data showing how curation prevents expensive overhauls. For example, reactive maintenance can cost 3-10x more than proactive approaches due to downtime and emergency fixes; curate evidence from your dependency analysis (e.g., potential vulnerabilities in outdated libraries like Angular). Estimate savings: Reducing technical debt could cut development time by 20-30%, freeing resources for innovation.

  • Emphasise Risk Mitigation: Stress security and compliance—e.g., uncurated code increases breach risks, as seen in supply chain attacks. Tie this to business impacts like regulatory fines or reputational damage, using examples from audits to show "before" vs. "after" scenarios.

  • Communicate Benefits Early and Often: Use presentations or demos to showcase quick wins, like automated scans revealing hidden issues. Involve stakeholders in planning discussions to address their priorities (e.g., scalability for growth). Frame it as enabling agility: Curated code means faster feature delivery without legacy drag.

  • Secure Resources and Incentives: Propose phased funding tied to milestones, and suggest incentives like recognising team efforts in curation to boost adoption.

Overcoming Common Objections

Address pushback head-on: For "It's too expensive," counter with long-term savings data; for "We're too busy," highlight how curation reduces future firefighting. Pilot programs can prove value with minimal risk, building momentum for full buy-in.

With management on board, curation gains the backing needed to thrive, evolving your software from merely maintained to strategically optimised.

4. Summary

In an era where software lifecycles are accelerating and dependencies like Angular become unsupported in as little as 18 months, relying solely on reactive maintenance mode invites technical debt, security vulnerabilities, and eventual obsolescence. This article has explored code curation as a superior, proactive alternative—emphasizing strategic dependency management, continuous refinement, and prevention over mere fixes.

Key takeaways include:

  • Understanding the Shift: Maintenance sustains the status quo reactively, while curation optimizes for longevity through vetted, streamlined codebases.

  • Foundational Steps: Begin with dependency analysis to catalog names, versions, EOS dates, risks, and impacts, forming a Software Bill of Materials (SBOM).

  • Automation and Regularity: Leverage tools like Snyk, Dependabot, and SonarQube for automated scans, extending to scheduled analyses even on unchanged code to catch emerging issues.

  • Planning and Execution: Develop a transition roadmap, secure management buy-in by highlighting ROI and risk mitigation, and plan work around analysis insights—prioritizing high-risk updates with testing and iteration.

  • Pros and Cons: Curation reduces long-term costs and enhances security but demands upfront investment; maintenance is simpler initially but leads to higher reactive expenses.

By adopting curation, teams can transform "maintenance mode" from a decline signal into a foundation for resilient, evolvable software. Whether starting small with a pilot or scaling enterprise-wide, the investment pays dividends in efficiency, innovation, and peace of mind. As software ecosystems evolve, curation isn't just best practice—it's essential for staying ahead.

5. Final Thoughts

There are always risks with software that is not constantly maintained; this approach tries to mitigate those risks by performing regular tasks that should not be too onerous. Recently, the npm registry has started to remove packages that are considered a high security risk. You definitely do not want to be in a position where your software product no longer builds at the point where you need to perform an update.

I have found aspects of this incredibly useful even for projects that are being actively maintained. Just knowing the deadlines for implementing upgrades is invaluable for planning.

