Ecosystem
The Ecosystem is how your compiled spec connects to the real world at deploy time and change time. It includes provider supply, discovery, and trust governance, plus control-plane orchestration and artifact generation workflows.
Why this exists
Your spec declares what capabilities it needs, not which vendor implements them. The domain section calls collections.write, ids.generate, and auth.sign_in by port name. The spec never mentions Supabase, PostgreSQL, or Redis.
This is not an accident. When business logic is coupled to vendor SDKs, swapping a database or auth provider means rewriting application code throughout the codebase. The Ecosystem makes the "swap providers without touching your application code" guarantee concrete and enforceable.
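To make the decoupling concrete, here is a minimal sketch of the pattern. The port interface, the `createTask` function, and the in-memory provider are all illustrative names, not the actual Gooi SDK surface:

```typescript
// Hypothetical sketch: domain logic depends on a port interface, never a vendor SDK.
// CollectionsWritePort and createTask are illustrative names, not Gooi APIs.
interface CollectionsWritePort {
  write(collection: string, record: Record<string, unknown>): string; // returns a record id
}

// Domain logic: knows only the port, never the provider behind it.
function createTask(collections: CollectionsWritePort, title: string): string {
  return collections.write("tasks", { title, done: false });
}

// Any provider satisfying the port can be swapped in without touching createTask.
const inMemory: CollectionsWritePort = {
  write: (collection, record) => `${collection}:${JSON.stringify(record)}`,
};

const id = createTask(inMemory, "Write docs");
```

Because `createTask` only sees the port, replacing `inMemory` with a Supabase- or Redis-backed provider changes zero lines of domain code.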
Three workflows in one pillar
The Ecosystem is one pillar with three developer workflows.
Bind providers
You have a compiled spec. It has a CapabilityBindingRequirements artifact that lists every port your application needs. You need to connect those requirements to real implementations.
1. Discover what providers exist for your requirements
2. Check eligibility for specific providers
3. Resolve providers and produce a lockfile
4. Commit the lockfile to version control
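The resolution step (3) can be sketched as matching each required port against the ports a candidate provider declares, then emitting one lockfile entry per port. The shapes below (`ProviderManifest`, `LockfileEntry`) are illustrative assumptions, not the real artifact schemas:

```typescript
// Hypothetical sketch of provider resolution. The real lockfile and manifest
// schemas are defined by Gooi; these shapes are simplified stand-ins.
interface ProviderManifest { package: string; version: string; ports: string[] }
interface LockfileEntry { port: string; package: string; version: string }

function resolve(requiredPorts: string[], candidates: ProviderManifest[]): LockfileEntry[] {
  return requiredPorts.map((port) => {
    // Pick the first eligible provider that covers this port.
    const provider = candidates.find((c) => c.ports.includes(port));
    if (!provider) throw new Error(`No eligible provider for port ${port}`);
    return { port, package: provider.package, version: provider.version };
  });
}

const lockfile = resolve(
  ["auth.sign_in", "credential_store"],
  [
    { package: "@gooi-marketplace/supabase", version: "1.0.0", ports: ["auth.sign_in", "auth.sign_out"] },
    { package: "@gooi-marketplace/credential-store-redis", version: "1.0.0", ports: ["credential_store"] },
  ],
);
```

Committing the resulting lockfile (step 4) is what makes binds reproducible: every environment resolves to the same provider versions.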
Capability Binding: how requirements become a lockfile.
Orchestrate capability changes
After providers are bound, teams still need deterministic outputs for setup, generation, and drift detection. The control-plane workflow uses compiled and bound artifacts to produce typed report, plan, apply, drift, and generate outputs.
1. Validate adapter support and contract coverage
2. Compile a deterministic capability resource plan
3. Generate integration artifacts for requested surfaces/targets
4. Optionally execute apply
5. Detect drift in advisory or strict mode
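Step 5's advisory/strict distinction can be sketched as a diff between the planned resource state and the observed one. The state shape and mode names here are simplified assumptions, not the control plane's actual typed outputs:

```typescript
// Hypothetical sketch of drift detection: diff planned vs. observed resource
// state. Advisory mode reports drift; strict mode fails on it.
type ResourceState = Record<string, string>; // resource key -> content hash (assumed shape)

function detectDrift(
  planned: ResourceState,
  observed: ResourceState,
  mode: "advisory" | "strict",
): string[] {
  const drifted = Object.keys(planned).filter((key) => observed[key] !== planned[key]);
  if (drifted.length > 0 && mode === "strict") {
    throw new Error(`Drift detected: ${drifted.join(", ")}`);
  }
  return drifted; // advisory mode: report only, never fail
}

const drift = detectDrift(
  { "table:tasks": "hash-a", "index:tasks_title": "hash-b" },
  { "table:tasks": "hash-a", "index:tasks_title": "hash-c" },
  "advisory",
);
```

The same comparison powers both modes; only the failure behavior differs, which is why strict mode suits CI gates while advisory mode suits routine reporting.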
Control Plane: how plan/apply/drift/generate workflows fit into the Ecosystem.
Author providers
You have a service, an API, or a data source. You want to make it available as a Gooi provider so other applications can bind to it.
1. Implement capability ports using the connector SDK
2. Declare a provider manifest with port coverage and hashes
3. Publish the package
4. (Optional) Submit for marketplace certification
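Step 2's manifest, with port coverage and hashes, can be sketched as follows. The field names and the package name `@example/acme-provider` are hypothetical; the real manifest schema comes from the connector SDK:

```typescript
// Hypothetical sketch of a provider manifest: declared port coverage plus a
// content hash per port implementation. Field names are illustrative.
import { createHash } from "node:crypto";

interface PortCoverage { port: string; implementationHash: string }
interface ProviderManifest { package: string; version: string; coverage: PortCoverage[] }

// Hash the implementation source so consumers can verify what they bound to.
function hashImpl(source: string): string {
  return createHash("sha256").update(source).digest("hex").slice(0, 12);
}

const manifest: ProviderManifest = {
  package: "@example/acme-provider", // hypothetical package name
  version: "0.1.0",
  coverage: [
    { port: "collections.write", implementationHash: hashImpl("write impl source") },
    { port: "ids.generate", implementationHash: hashImpl("ids impl source") },
  ],
};
```

The hashes are what let binding and certification detect when a published implementation no longer matches what the manifest claims.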
Building a Provider: how to implement and publish a provider.
First-party providers
These providers are owned and maintained by the Gooi team. They are the recommended starting point.
| Package | Ports implemented | When to use |
|---|---|---|
| @gooi-marketplace/memory | All ports | Development and testing only |
| @gooi-marketplace/supabase | auth.sign_in, auth.sign_out, auth.refresh, auth.validate_principal | Supabase auth for web/HTTP surfaces |
| @gooi-marketplace/credential-store-cookie | credential_store | Web surface, browser-side tokens |
| @gooi-marketplace/credential-store-local-storage | credential_store | Web surface, persistent browser tokens |
| @gooi-marketplace/credential-store-redis | credential_store | HTTP surface, server-side tokens |
@gooi-marketplace/memory implements every port with correct, in-memory behavior. Use it for local development and for running your scenario suite.
@gooi-marketplace/memory is not for production. Its data does not persist across restarts.
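A sketch of why an in-memory provider suits tests but not production: its state lives in process memory and vanishes on restart. The `CredentialStorePort` shape below is an illustrative stand-in, not the memory package's actual API:

```typescript
// Hypothetical sketch of an in-memory credential store. State is held in a
// Map inside the process, so it is lost whenever the process restarts.
interface CredentialStorePort {
  put(key: string, token: string): void;
  get(key: string): string | undefined;
}

function makeMemoryCredentialStore(): CredentialStorePort {
  const store = new Map<string, string>(); // process memory only
  return {
    put: (key, token) => void store.set(key, token),
    get: (key) => store.get(key),
  };
}

const credentials = makeMemoryCredentialStore();
credentials.put("session:alice", "token-123");
```

Swapping this for the Redis-backed store changes which process holds the Map-equivalent state, which is exactly the decision the binding step captures in the lockfile.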
What's next
- Capability Binding: how requirements, resolution, and the lockfile work together
- Control Plane: deterministic report/plan/apply/drift/generate workflows for bound capabilities
- The Marketplace: how to discover providers and understand trust tiers
- Building a Provider: how to implement capability ports and publish a provider