
Pokeshop API Demo

As a testing ground, the team at Tracetest has implemented a sample instrumented API around the PokeAPI.

The idea is to have a system divided into microservices that behaves like a typical production scenario, with async processing (RabbitMQ and Kafka), a cache layer (Redis), database storage (Postgres), and simple CRUD interfaces for Pokemon.

With this, users can get familiar with Tracetest by focusing on creating assertions, visualizing traces, and identifying the different data that comes from the collector (Jaeger). Users will also learn basic instrumentation practices: which tools to use, what data to send, when to send it, and which suggested standards to follow.

info

Want to run tests against the Pokeshop Demo without installing it locally? Click the link below and we will add you to our tracetest-demo organization and give you access to the pokeshop-demo org as an engineer. You can run and create your own tests!

👉 Access the shared demo here.

Use Cases

We have five use cases that exercise each component of this architecture and that can be observed via OpenTelemetry and tested with Tracetest. Each one is triggered by an API call to its respective endpoint (see the sketch after this list):

  • Add Pokemon: Adds a new Pokemon to the database, relying only on user input.
  • Get Pokemon by ID: Given a Pokemon ID, this endpoint returns the Pokemon's data. If the same Pokemon was queried before, the API returns it from its cache.
  • List Pokemon: Lists all Pokemon registered in Pokeshop.
  • Import Pokemon: Given a Pokemon ID, this endpoint triggers an async process that fetches the Pokemon's data from the PokeAPI and adds it to the database.
  • Import Pokemon from Stream: Listens to a stream for import events; each event triggers the same async process of fetching the Pokemon's data from the PokeAPI and adding it to the database.
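
As a rough illustration, the snippet below calls two of these endpoints from a small TypeScript script. The base URL and the exact route paths (`/pokemon`, `/pokemon/import`) are assumptions based on the descriptions above and may differ in your deployment.

```typescript
// Hypothetical sketch of calling the Pokeshop endpoints described above.
// The base URL and route paths are assumptions; check your deployment for the real values.
const BASE_URL = process.env.POKESHOP_URL ?? "http://localhost:8081";

async function addPokemon() {
  // "Add Pokemon": create a record purely from user input.
  const res = await fetch(`${BASE_URL}/pokemon`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name: "pikachu", type: "electric" }),
  });
  return res.json();
}

async function importPokemon(id: number) {
  // "Import Pokemon": enqueue an async job that fetches data from the PokeAPI.
  const res = await fetch(`${BASE_URL}/pokemon/import`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ id }),
  });
  return res.status; // the actual import happens later in the Queue Worker
}

addPokemon().then(console.log);
importPokemon(25).then(console.log);
```
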

System Architecture

The system is divided into three components:

  • an API that serves client requests,
  • a Queue Worker that handles background processes, receiving data from the API, and
  • a Stream Worker that handles import events received from a stream.

The API and the Queue Worker communicate over a RabbitMQ queue; both services emit telemetry data to Jaeger and talk to a Postgres database. Additionally, the Stream Worker listens to a Kafka stream and executes any import events published to it.
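
To make the API-to-worker flow more concrete, here is a minimal, hypothetical sketch of a queue worker that consumes import jobs from RabbitMQ and records a span for each message. It assumes the `amqplib` client and an already-configured OpenTelemetry Node SDK exporting to Jaeger; the queue and span names are illustrative and not necessarily the ones used by the actual Pokeshop code.

```typescript
// Minimal Queue Worker sketch, assuming amqplib and an OpenTelemetry SDK
// configured elsewhere (exporter pointed at Jaeger). Names are illustrative only.
import amqp from "amqplib";
import { trace } from "@opentelemetry/api";

const tracer = trace.getTracer("pokeshop-queue-worker");

async function main() {
  const connection = await amqp.connect(process.env.RABBITMQ_URL ?? "amqp://localhost");
  const channel = await connection.createChannel();
  await channel.assertQueue("import-pokemon");

  channel.consume("import-pokemon", async (message) => {
    if (!message) return;
    // One span per processed message so each import shows up in the trace.
    await tracer.startActiveSpan("process import job", async (span) => {
      const { id } = JSON.parse(message.content.toString());
      span.setAttribute("pokemon.id", id);
      // ...fetch the Pokemon from the PokeAPI and insert it into Postgres here...
      span.end();
    });
    channel.ack(message);
  });
}

main().catch(console.error);
```
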

A diagram of the system structure can be seen here:

In our live tests, we deploy everything into a single Kubernetes namespace via a Helm chart.

The Pokeshop API is only accessible from within the Kubernetes cluster network, as only Tracetest needs to be able to reach it.