
Documentation Index

Fetch the complete documentation index at: https://mintlify.com/AllianceBioversityCIAT/alliance-research-indicators-client/llms.txt

Use this file to discover all available pages before exploring further.

A research result is a structured record of a research outcome produced within the Alliance of Bioversity International and CIAT (CGIAR ecosystem). Each result is linked to exactly one indicator, enriched with up to eleven categories of metadata, and tracked through a publication lifecycle — from the moment a researcher creates it to the point it is validated and made available across federated platforms. Results are the atomic unit everything else in Alliance Research Indicators is built around: dashboards aggregate them, searches retrieve them, and cross-platform federation links them to their counterparts in STAR, TIP, PRMS, and AICCRA.

Result identity

Every result has two identity fields that together form its cross-platform key:
| Field | Description |
| --- | --- |
| `platform_code` | A short code identifying which platform owns this record (e.g., the Alliance platform). |
| `result_official_code` | A human-readable sequential code assigned when the result is created (e.g., `AR-2024-0042`). |
The combination (platform_code, result_official_code) must be unique across the entire federation. Attempting to create a result that collides with an existing record returns an HTTP 409 Conflict response. Rather than retrying, the application surfaces a link-to-existing flow so the reporter can attach their work to the already-existing record instead of creating a duplicate.
The 409 conflict flow is intentional: it prevents duplicated records across STAR, TIP, PRMS, and AICCRA and keeps the federation coherent. If you see a “duplicate detected” prompt, use the link option to associate your result with the existing record.
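The 409 handling described above can be sketched as follows. This is an illustrative sketch only: `createResult`, the `post` callback, and the `existing_result_id` response field are assumptions, not the client's actual API.

```typescript
// Hypothetical payload shape for creating a result.
interface CreateResultPayload {
  platform_code: string;
  title: string;
}

// Either the result was created, or a duplicate was detected and the
// client should surface the link-to-existing flow (never retry).
type CreateOutcome =
  | { kind: "created"; id: number }
  | { kind: "duplicate"; existingId: number };

async function createResult(
  payload: CreateResultPayload,
  post: (body: CreateResultPayload) => Promise<{ status: number; body: any }>
): Promise<CreateOutcome> {
  const res = await post(payload);
  if (res.status === 409) {
    // Duplicate (platform_code, result_official_code): do not retry.
    // Surface the link-to-existing flow so the reporter can attach
    // their work to the existing record. Field name is an assumption.
    return { kind: "duplicate", existingId: res.body.existing_result_id };
  }
  return { kind: "created", id: res.body.id };
}
```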

Result lifecycle

A result moves through a defined sequence of states. Each transition is server-authoritative; the client reflects the current status_id and shows only the actions permitted at that stage.
```
create → edit (draft) → submit → MEL review → accepted / returned
                                                  │
                                                  ▼
                                              published
```
| State | Who acts | What happens |
| --- | --- | --- |
| Draft | Researcher | Metadata tabs can be edited freely; partial saves are allowed at any point. |
| Submitted | Researcher (triggers) | Pre-submission validation runs client-side; if it passes, `PATCH /results/:id/submit` is called. The result is locked for the reporter. |
| Under review | MEL Regional Expert | The MEL expert can edit, annotate, accept, or return the result. |
| Accepted / Published | MEL Regional Expert | The result is visible in dashboards, search, and federation views. |
| Returned | MEL Regional Expert | The result goes back to draft with structured feedback; the reporter can address it and resubmit. |
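The table above implies a simple permitted-actions map per state, which the client can use to decide which buttons to show. A minimal sketch, assuming the state names from the table; the action strings and the mapping shape are illustrative, and the real client keys off `status_id` values that are not documented here.

```typescript
// States follow the lifecycle table; names are illustrative labels,
// not the backend's actual status_id values.
type ResultState = "draft" | "submitted" | "under_review" | "accepted" | "returned";

// Actions permitted at each stage. Transitions themselves are
// server-authoritative; this map only drives what the UI offers.
const PERMITTED_ACTIONS: Record<ResultState, string[]> = {
  draft: ["edit", "submit"],
  submitted: [], // locked for the reporter while awaiting review
  under_review: ["edit", "annotate", "accept", "return"], // MEL expert only
  accepted: [], // visible in dashboards, search, and federation views
  returned: ["edit", "resubmit"], // back with structured feedback
};

function allowedActions(state: ResultState): string[] {
  return PERMITTED_ACTIONS[state];
}
```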

Versioning

Results are versioned server-side. The version_id field on the result record increments whenever the backend records a state transition. The client includes a ?version=N query parameter on result detail requests and displays a stale-version prompt when the loaded version does not match the current server version. This prevents silent overwrites when two users are viewing the same result.
If you see a “this record has been updated — reload to continue” banner, reload before editing. Saving over a stale version will be rejected by the backend.
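The stale-version check reduces to a comparison between the version the client loaded and the version the server currently holds. A minimal sketch, assuming the `version_id` field and `?version=N` parameter described above; the helper names are hypothetical.

```typescript
// True when the loaded record no longer matches the server's current
// version_id — the client should show the reload banner, not save.
function isStale(loadedVersion: number, serverVersion: number): boolean {
  return loadedVersion !== serverVersion;
}

// The client pins the version it loaded via a ?version=N query
// parameter on result detail requests.
function detailUrl(resultId: number, version: number): string {
  return `/results/${resultId}?version=${version}`;
}
```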

Metadata tabs

Each result is composed of eleven metadata sections. All eleven are always present in the editing interface, but which fields within each section are required depends on the result’s indicator type. Saving one tab does not require other tabs to be complete — partial saves are always permitted.
- Core descriptive fields: title, description, reporting year, start and end dates, primary language, keywords. Also captures the primary project contract link and the assigned indicator. The language list is CLARISA-sourced; no free-text language entry.
- Alignment to the Alliance’s strategic levers and CGIAR impact areas. Both lists are sourced from CLARISA. Also captures SDG targets relevant to the result. Required for all indicator types to ensure results roll up correctly in aggregate dashboards.
- Institutions involved in producing or benefiting from this result. Institution search resolves against the CLARISA institution registry — no free-text institution entry is permitted. Each partner carries a role (e.g., implementer, funder, beneficiary) drawn from the CLARISA institution-role list.
- Files, URLs, and other documentation supporting the result. Files are uploaded to the file-manager microservice first; the returned stable URL is attached to the result record. Evidence files survive result edits and version transitions. Supported formats include PDF, images, and datasets.
- Outcome Impact Case Report narrative fields. Captures the change story: what changed, for whom, how the Alliance contributed, and the evidence of significance. Required for results mapped to the OICR indicator type. Supports OICR template download.
- Intellectual property ownership and licensing metadata. Records IP owners (resolved against CLARISA institutions) and the applicable license type. Relevant for results involving innovations, tools, or datasets with IP implications.
- Training, mentoring, and knowledge-transfer event metadata. Captures session type, delivery modality, session format, dates, participant counts by gender (female / male / non-binary / total), trainee affiliation, and supervisor. All controlled lists (session type, delivery modality, language) are sourced from CLARISA. Center Admins can bulk-upload capacity sharing records via a structured template.
- Policy engagement and change metadata: the type of policy change, the stage in the policy process, the geographic scope of the policy, and the organizations involved. Required for results mapped to the policy-change indicator type.
- Innovation development and scaling metadata: innovation type, readiness level (maturity level), scaling stage, and the organizations that contributed. Required for results mapped to innovation indicator types. Maturity levels are sourced from the /maturity-levels endpoint.
- The geographic extent of the result’s reach or applicability. Scope levels range from global → regional → national → subnational → site-specific. Country and region selections resolve against CLARISA; subnational selections resolve against the subnational-by-ISO-alpha endpoint. No free-text geography entry is permitted.
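The indicator-dependent requiredness described above can be sketched as a lookup: alignment is required for every indicator type, while other tabs become required only for their matching indicator. All names below (indicator types, tab identifiers) are illustrative assumptions, not the application's actual identifiers; partial saves never require the other tabs to be complete.

```typescript
// Hypothetical indicator types; the real system resolves these from
// the assigned indicator on the result.
type IndicatorType = "oicr" | "policy_change" | "innovation" | "capacity_sharing" | "other";

// Hypothetical tab identifiers for the indicator-specific sections.
const INDICATOR_SPECIFIC_TABS: Record<IndicatorType, string[]> = {
  oicr: ["oicr_narrative"],
  policy_change: ["policy_change"],
  innovation: ["innovation"],
  capacity_sharing: ["capacity_sharing"],
  other: [],
};

// Alignment is required for all indicator types so results roll up
// correctly in aggregate dashboards (see above).
function requiredTabs(indicator: IndicatorType): string[] {
  return ["alignment", ...INDICATOR_SPECIFIC_TABS[indicator]];
}
```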

Federation

Results in Alliance Research Indicators are part of a broader multi-platform federation. The client can link to result counterparts in STAR, TIP, PRMS, and AICCRA using deep-link URLs supplied by the main API — but it does not write back to those platforms. Federation is always read-and-link-only from this client.
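The read-and-link-only rule can be made concrete with a small sketch: the client consumes deep-link URLs supplied by the main API and renders them, but exposes no write path to the sibling platforms. The `FederationLink` shape and `deepLinkUrl` field name are assumptions for illustration.

```typescript
// The federation members named above.
const FEDERATED_PLATFORMS = ["STAR", "TIP", "PRMS", "AICCRA"] as const;
type Platform = (typeof FEDERATED_PLATFORMS)[number];

interface FederationLink {
  platform: Platform;
  deepLinkUrl: string; // supplied by the main API; the client never constructs these
}

// Render outbound links only — there is deliberately no function here
// that issues writes to STAR, TIP, PRMS, or AICCRA.
function renderFederationLinks(links: FederationLink[]): string[] {
  return links.map((l) => `${l.platform}: ${l.deepLinkUrl}`);
}
```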

Creating results

Step-by-step guide to creating a new result, choosing an indicator, and navigating the tabs.

Result metadata reference

Field-by-field reference for every metadata tab, including which fields are CLARISA-controlled and which are indicator-specific.
