Implementation is increasingly recognized as an inseparable stage in the incorporation of health technologies, bridging the gap between theoretical frameworks and practical application. It is inherently a complex, interdisciplinary, and multi-level process that requires careful consideration of contextual factors, clear identification of stakeholders, and proactive strategies to ensure the technology reaches its intended users smoothly. These strategies must account for the size and management of services and implementation contexts, as well as the specific technical and operational requirements of the technology itself [68].
In this scenario, implementing technologies within the SUS is critical to ensuring broad and equitable access to healthcare. In this study, which compiled implementation experiences within the SUS, we observed that the performance indicators and models used still vary considerably. This heterogeneity complicates the comparison of studies and, consequently, hinders a comprehensive understanding of the factors that promote or limit the full integration of new technologies in the country.
The results indicate that Brazilian implementation research predominantly focuses on local health practices and programs, using implementation frameworks mainly to assess the effectiveness of the processes involved. However, two significant gaps emerged: (i) insufficient integration between the implementation process and the stages of technology incorporation into the SUS and (ii) discontinuities across the essential phases of the implementation process—planning, execution, and evaluation. The latter phase is frequently undermined by the limited scope and number of outcomes analyzed.
The predominance of implementation studies focused on clinical policies and practices, together with the lack of research into the implementation of “hard” or “soft” technologies, contrasts with the growing demand for highly complex medicines, equipment, and devices assessed annually by the National Commission for the Incorporation of Technologies into the Unified Health System (Conitec) and incorporated by the Ministry of Health into the SUS [69]. Although the incorporation of technologies follows a legal rite with criteria and deadlines clearly defined in Brazilian legislation, the implementation phase is often disconnected from this process [11, 12]. This disconnect creates significant gaps, notably delays in making technologies available to the population and suboptimal benefits, thereby increasing inequalities in access to healthcare [70].
Clinical practices and models of care were frequently the focus of the implementation efforts identified in this review. These interventions play a critical role in strengthening health systems, especially in contexts where improvements in service delivery rely more on optimizing care processes than on introducing new equipment or pharmaceuticals. However, they are rarely accompanied by predefined indicators related to cost-effectiveness, financing, or long-term sustainability. This gap hinders evaluation of their economic impact and limits their potential for scale-up and institutionalization within the SUS.
In addition, there is a clear lack of strategic direction and harmonization at the federal level for implementation processes in Brazil [67]. This highlights the pressing need for frameworks tailored to address persistent systemic challenges—such as insufficient public infrastructure, fragmented planning, precarious organization, inadequate maintenance of regionalized networks, and chronic underfunding—while accommodating Brazil’s unique social, economic, and geographic complexities [9].
In response to this need, recent progress has been made through the development of a structured approach designed to support the standardization of implementation processes within the SUS. Built upon multiple international implementation science frameworks, such as CFIR and RE-AIM, the approach, named ImplementaSUS, has been specifically adapted to reflect the principles, norms, and operational dynamics of the Brazilian health system [71]. This model is currently undergoing validation and aims to guide implementation efforts across diverse regional contexts, promote the generation of relevant indicators (including those related to cost-effectiveness and sustainability), and enhance the scale-up and institutionalization of innovations. By offering a context-sensitive and evidence-informed pathway for implementation, ImplementaSUS seeks to bridge the persistent gap between technology incorporation and effective service delivery within the SUS.
Our analysis revealed that many studies do not employ implementation science frameworks comprehensively [30,31,32, 34, 38, 42, 43, 45, 47, 49, 55, 62, 63, 66,67,68,69,70], and often lack detailed methodological descriptions, especially regarding planning stages [32, 34, 43, 55, 62, 66, 67]. This deficit undermines the identification of barriers and facilitators, the tailoring of implementation strategies, and the adaptation of processes to the specific needs of each local context [66, 67]. Furthermore, the limited presentation and measurement of implementation outcomes constrains the ability to assess implementation effectiveness comprehensively.
Outcomes such as acceptability, adoption, adequacy, feasibility, fidelity, implementation cost, penetration, and sustainability provide valuable parameters to track the consolidation of new technologies over time [70, 71]. However, these outcomes are inconsistently defined and measured across studies. Recent systematic reviews of health policy implementation outcomes have shown that acceptability is the most commonly assessed outcome [68, 69]. Despite this, only 13.3% of the studies reviewed here included ‘acceptability.’ Nevertheless, in the studies where it was examined, acceptability emerged as a crucial parameter for understanding the implementation process: it goes beyond individual perceptions or cognitions, offering strategic perspectives and informing critical interventions to enhance implementation success. This suggests that acceptability should be assessed more often in implementation research.
‘Adequacy’ was measured in several studies that used the Plan-Do-Check-Act (PDCA) cycle as a methodological framework. PDCA is a structured, iterative approach aimed at continuous improvement through a cycle of planning, execution, result verification, and adjustment [70, 71]. In the studies analyzed, the cycle was applied to assess the initial context through baseline audits, propose interventions guided by quality improvement criteria, and monitor results in follow-up audits. These studies predominantly adopted the JBI framework, which helped structure implementation strategies and yielded positive results in the implementation process. The increase in compliance with best practices suggests the potential of this model, which is grounded in three core principles: understanding organizational culture, empowering individuals and organizational systems, and supporting, reinforcing, and sustaining infrastructure [72]. However, most of these studies focused on limited-scope scenarios with short follow-up periods, which hampered any assessment of the sustainability of the adopted practices.
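To illustrate how compliance is typically quantified in such audit cycles (the notation is ours and is not drawn from any single included study), the compliance with a given best-practice criterion c and the audit-to-audit change can be expressed as:

\[
\text{Compliance}_c = \frac{\text{audited cases meeting criterion } c}{\text{total audited cases}} \times 100\%, \qquad \Delta_c = \text{Compliance}_c^{\text{follow-up}} - \text{Compliance}_c^{\text{baseline}}
\]

A positive \(\Delta_c\) after a single baseline–follow-up pair indicates improvement, but, as noted above, it says little about whether the gain is sustained across subsequent cycles.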
Only one study applied the JBI framework to the implementation of a hard technology in a large Brazilian state (Minas Gerais) [59], highlighting the limited experience accumulated so far with this approach in large-scale implementation within the SUS. Although the JBI method can measure aspects of intervention sustainability, none of the projects completed more than one evaluation cycle, and the periods between audits were typically less than six months. This restricts the ability to draw conclusions about the long-term impact of the interventions [25, 33, 41, 44, 46, 48, 50,51,52,53, 56, 57, 59,60,61]. One study, however, adopted a prospective longitudinal design spanning the four cycles proposed by the PDCA tool and attributed its success to key implementation factors: continuous support from administrative and clinical leadership and motivated frontline staff [47].
The application of theoretical-logical models in projects investigating the fidelity outcome underscores the importance of these tools in program evaluation, ensuring that interventions are grounded in solid logic and consistent evidence. However, a key issue still lacking consensus in the evaluation of this outcome is how to assign appropriate weights and standards to estimate the degree of implementation [23, 30, 32, 34, 38, 45, 49, 63, 66]. Although authors have used consensus techniques to score the components and sub-components of the models and thereby increase the credibility of the analysis, the absence of reference values continues to create uncertainty in the calculation, complicating the comparison and interpretation of results.
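For illustration only (again, the notation is ours rather than taken from the reviewed studies), a weighted degree-of-implementation score of the kind these models compute can be written as:

\[
D = \frac{\sum_{i=1}^{n} w_i s_i}{\sum_{i=1}^{n} w_i},
\]

where \(s_i\) is the observed score for component or sub-component \(i\) and \(w_i\) is its consensus-assigned weight. The unresolved issues noted above correspond to choosing the \(w_i\) and the reference cut-offs that classify \(D\) as, for example, adequate, partial, or incipient implementation; without agreed reference values, the same \(D\) can be interpreted differently across studies.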
Adoption, feasibility, cost, and sustainability were underexplored in the included studies, even though these outcomes bear directly on whether the proposed practices can be implemented and maintained. The scarcity of cost and sustainability analyses raises concerns about the planning of implementation projects, particularly given the complexity of the health system. Implementation models for the SUS must consider the high turnover of teams and other local challenges, proposing strategies that guarantee the sustainability of the practices implemented. Moreover, neglecting cost assessments can compromise the scalability and long-term adoption of interventions, as financial viability is essential for decision-making at both local and national levels. Future implementation research should incorporate economic evaluations to strengthen the evidence base and support strategic planning.
While one-off studies are vital for piloting interventions and modeling scalable practices, they do not guarantee sustainability across contexts over time [9]. Therefore, investing in the principles that underpin these methods must be an ongoing effort by organizations, which need to recognize that implementation processes are dynamic, diverse, and must remain adaptable to local needs. It is the responsibility of implementation teams to develop the expertise to map usage scenarios, identify barriers, and propose interventions aligned with local demands.
In this review, we observed a clear difficulty in classifying the outcomes presented in the analyzed studies according to the definitions of Proctor et al. [18]. Consistent with previous reviews [68, 69], a wide variety of terms lacking clear definitions were used, reflecting the nascent stage of implementation research, in which outcome standardization remains in progress. Although recent efforts have been made to harmonize outcome definitions [73], considerable work remains before studies generate comparable, poolable results and robust evidence to guide the process.
Other limitations of this review include the search methods used in the databases and the heterogeneity of implementation study designs. These studies often have poorly standardized indexing, making them challenging to identify with typical search strategies. Additionally, the results or impacts of the described outcomes were not assessed, as the primary focus of the review was on methodological issues rather than a detailed synthesis of the data. Consequently, measurement of the outcomes and assessment of study quality were not included, as such analyses could introduce distortions without adding relevant information to the descriptive analysis presented in this article.