The declarative framework that replaces brittle scripts with reliable, component-based AI workflows
Frontier LLMs are already capable enough to complete most knowledge-work tasks. Yet a recent MIT study found that 95% of enterprise AI pilots fail to deliver ROI. The bottleneck isn't model intelligence; it's orchestration. What's missing is a robust framework, and a declarative philosophy, for turning AI's potential into reliable, production-grade systems.
@step  # steps are leaf components in the execution DAG
def extract_user_stories():
    transcript = input("transcript")
    return Prompt(  # Prompts are auto-compiled into XML
        system="You're a senior Business Analyst.",
        instructions="""
            Carefully read the following meeting transcript:
            {transcript}
            Extract all user stories described or implied.
        """,
        output_format="As a user, I want to...",
    )

@step
def clean_user_stories(stories):
    prior_call = use_context()  # use a hook to get context
    return Prompt(
        system="You're a senior QA engineer.",
        instructions="""
            Review the extracted user stories:
            {stories}
            Organize the stories into a numbered list.
        """,
        context=prior_call,
    )

@component  # Components are nestable and composable.
def user_story_pipeline():
    # Example pipeline chaining two steps
    raw = extract_user_stories()
    return clean_user_stories(raw)

@app  # app is the entry point
def main():
    return user_story_pipeline()
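To make the mechanics concrete, here is a minimal, self-contained sketch of how decorators like `@step` and `@app` could be implemented in plain Python, and how a `Prompt` might be "auto-compiled" into XML. The `Prompt` dataclass, the `compile` method, and the tag names below are illustrative assumptions, not the framework's actual internals.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Prompt:
    """Declarative prompt spec (hypothetical stand-in for the framework's Prompt)."""
    system: str
    instructions: str
    output_format: Optional[str] = None

    def compile(self) -> str:
        # Assumed XML compilation, mirroring the "auto-compiled into XML" note.
        parts = [
            f"<system>{self.system}</system>",
            f"<instructions>{self.instructions.strip()}</instructions>",
        ]
        if self.output_format:
            parts.append(f"<output_format>{self.output_format}</output_format>")
        return "\n".join(parts)

def step(fn):
    """Mark a function as a leaf node in the execution DAG."""
    fn.is_step = True
    return fn

def app(fn):
    """Mark a function as the pipeline entry point."""
    fn.is_app = True
    return fn

@step
def extract(transcript: str) -> Prompt:
    return Prompt(
        system="You're a senior Business Analyst.",
        instructions=f"Extract all user stories from:\n{transcript}",
        output_format="As a user, I want to...",
    )

@app
def main() -> str:
    # A real runtime would traverse the DAG; here we call the step directly.
    return extract("Standup notes from Monday...").compile()
```

In a real runtime the decorators would register each function in a graph and defer execution, rather than calling through immediately; the sketch only shows the declarative surface.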