findAll()
}
```
Codegen tools generate gRPC stubs from IDL code.
The example above is Thrift, the IDL used in Facebook's RPC framework.
https://github.com/facebook/fbthrift
note:
Many IDLs have been developed over time. Mozilla, Microsoft, IBM... and more developed their own internal RPC frameworks with their own IDLs [2]
In the paper mentioned above, they wrote the interface using the Mesa interface modules feature:
`This generation is specified by use of Mesa interface modules. These are the basis of the Mesa (and Cedar) separate compilation and binding mechanism [9]. An interface module is mainly a list of procedure names, together with the types of their arguments and results`
[2] https://en.wikipedia.org/wiki/Interface_description_language
---
*gRPC is a modern open source high performance Remote Procedure Call (RPC) framework that can run in any environment.*
https://grpc.io/
note:
Google Remote Procedure Calls
"gRPC was initially created by Google, which has used a single general-purpose RPC infrastructure called **Stubby** to connect the large number of microservices running within and across its data centers. In March 2015, Google decided to build the next version of Stubby and make it open source. The result was **gRPC**"
---
### Why a framework?
gRPC dictates how you build your network interface.
Code is generated for you, batteries included; you only need to fill in the gaps.
note:
All the underlying details about networking, encoding & more are handled for you.
It is a framework mostly in the server sense: servers must use the generated server stub, and only need to implement the service interfaces.
Clients use the generated client stub. For them, the gRPC code is less intrusive and feels more like a library.
Some implementations wrap the original C library, some don't.
---
### Built on top of HTTP2
So we get for free
- **Multiplexing**
- Header **compression**
- **Server push**
- **TLS**
note:
Explain multiplexing and server push. Multiplexing: many concurrent request/response streams share a single TCP connection, so calls don't queue behind one another at the connection level. Server push: the server can proactively send data to the client without an explicit request.
---
### 4 types of RPC supported
- **Unary** (single request, single response)
- **Server streaming** (single request, stream of responses)
- **Client streaming** (stream of requests, single response)
- **Bidirectional streaming** (both sides stream)
note:
Explain that each of these RPC types can be specified in the Protocol Buffers IDL
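The four kinds map directly onto the IDL via the `stream` keyword; a hypothetical service showing all of them (names illustrative):

```protobuf
service EchoService {
  // Unary: one request, one response
  rpc Echo(EchoRequest) returns (EchoResponse);
  // Server streaming: one request, a stream of responses
  rpc ServerStream(EchoRequest) returns (stream EchoResponse);
  // Client streaming: a stream of requests, one response
  rpc ClientStream(stream EchoRequest) returns (EchoResponse);
  // Bidirectional streaming: both sides stream
  rpc BidiStream(stream EchoRequest) returns (stream EchoResponse);
}
```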
---
### Metadata
Key-value pairs of data used to provide additional information about a call.
Implemented using HTTP/2 headers.
https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md
note:
gRPC metadata can be sent and received by both the client and the server. Headers are sent from the client to the server before the initial request and from the server to the client before the initial response of an RPC call.
The link above documents the supported values for metadata.
Can be useful for: Authentication & tracing
---
### And many more features
- **Health checking** (Service-specific health checking)
- **Interceptors** (Middleware for RPCs)
- **Reflection** (Service discoverability & ease debugging)
- RPC automatic & manual **cancellations**
- Call **retries**
- **Flow control** for streaming
- **Load balancing** (Client requests can be load balanced between multiple servers)
note:
It is important to explain that these features might differ from language to language, since it depends completely on how each of them implements gRPC
- **Flow control** is a mechanism to ensure that a receiver of messages does not get overwhelmed by a fast sender. Flow control prevents data loss, improves performance and increases reliability.
- **Reflection**: we won't go into detail here, but it is worth researching further since it can improve developer experience
- **Health check**: gRPC specifies a standard service API ([health/v1](https://github.com/grpc/grpc-proto/blob/master/grpc/health/v1/health.proto)) for performing health check calls against gRPC servers. An implementation of this service is provided, but you are responsible for updating the health status of your services. It is pluggable, and some languages might not provide it.
---
### Protocol buffers
*Protocol Buffers are language-neutral, platform-neutral extensible mechanisms for serializing structured data.*
https://protobuf.dev/
note:
Explain that it is the default binary serialization format supported by gRPC
It is also developed by google.
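We won't dwell on the wire format, but as a taste: protobuf encodes integers as base-128 varints, seven bits per byte with a continuation bit. A minimal sketch (pure Rust, no protobuf dependency):

```rust
// Minimal base-128 varint encoder, as used by the protobuf wire format.
// Each byte carries 7 bits of the value, least significant group first;
// the high bit signals that more bytes follow.
fn encode_varint(mut value: u64) -> Vec<u8> {
    let mut out = Vec::new();
    loop {
        let byte = (value & 0x7F) as u8;
        value >>= 7;
        if value == 0 {
            out.push(byte);
            break;
        }
        // Set the continuation bit: more bytes follow.
        out.push(byte | 0x80);
    }
    out
}

fn main() {
    // Small numbers fit in a single byte; 300 needs two.
    assert_eq!(encode_varint(1), vec![0x01]);
    assert_eq!(encode_varint(300), vec![0xAC, 0x02]);
}
```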
---
### They are a combination of
- The **Interface Definition Language**
- The compiler that **generates code** from IDL files
- Language-specific **runtimes**
- The **serialization format**
note:
Here we will focus on the IDL and the tooling, we won't focus on the serialization format.
---
### Protobufs as an Interface Definition Language
---
### Defining messages
```protobuf
// amend_termination/request/v1/request.proto
syntax = "proto3";

package amend_termination.request.v1;

import "google/protobuf/timestamp.proto";

message AmendTerminationRequest {
  string policy_id = 1;
  google.protobuf.Timestamp requested_at = 2;
  google.protobuf.Timestamp interruption_at = 3;
  optional string description = 4;

  oneof reason {
    CustomerTerminateReason customer = 5;
    PrimaTerminateReason prima = 6;
  }
}
```
---
### Defining a service
```protobuf
// service/v1/service.proto
syntax = "proto3";

package service.v1;

import "amend_termination/request/v1/request.proto";
import "amend_termination/response/v1/response.proto";

service PolicyManagementService {
  rpc AmendTermination(amend_termination.request.v1.AmendTerminationRequest)
      returns (amend_termination.response.v1.AmendTerminationResponse);
}
```
---
### Remarkable features of Protocol buffers
- **Strongly typed** data
- **Language** and **platform neutral**
- **Compact binary format**
- Support for **RPC service definition**
- **Backward and Forward compatibility**
note:
Give a short example of why it is backward and forward compatible. Mention tags.
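A short, hypothetical example of evolving a message: tags, not names, identify fields on the wire.

```protobuf
// v1
message Policy {
  string policy_id = 1;
}

// v2: a field added under a fresh tag
message Policy {
  string policy_id = 1;
  optional string holder_name = 2;
}
```

An old reader skips the unknown tag 2 (forward compatibility); a new reader simply sees field 2 as unset in old payloads (backward compatibility). Reusing or renumbering tags breaks this.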
---
### The protoc compiler
Compiles `.proto` files into code.
Supports plugins for different languages.
```bash
protoc --proto_path=src --python_out=build/gen src/foo.proto
```
note:
`--proto_path` specifies the source directory, `--*_out` the destination directory, and the rest is the path to your `.proto`
---
### Buf CLI
- A **linter** for proto files
- A **formatter** for proto files
- A system to organize your proto files by **workspaces**
- A feature to check for **breaking changes** in your definitions
- A **plugin system** to compile proto files into multiple formats
- **Editor integration**
- And more!
https://buf.build/product/cli
note:
Explain that it builds on top of protoc. Be very short here, just mention the tool briefly. It is important because we use it.
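For reference, a minimal `buf.yaml` sketch (rule names illustrative, per Buf's configuration format):

```yaml
version: v2
lint:
  use:
    - STANDARD
breaking:
  use:
    - FILE
```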
---
## gRPC in the Rust ecosystem
:heart:
---
# Tonic
*A gRPC over HTTP/2 rust implementation focused on high performance, interoperability, and flexibility*
https://github.com/hyperium/tonic
note:
It has first class support for async/await.
The main goal of tonic is to provide a generic gRPC implementation over HTTP/2 framing.
Codegen tools need to be used to generate the client and server stubs that will encode and decode the binary data and deal with other gRPC features such as streaming.
---
### Features
- **TLS**
- **Load balancing**
- RPC cancellation via **timeouts**
- Request/Response **compression**
- Bidirectional **streaming**
- **Health check** of services
- **Interceptors**
- **Reflection**
- Client & Server **stub generation**
- Extensible via **Tower** services
note:
These are only a few notable features; it provides more.
---
### Generate code from Proto definitions :gear:
```rust
// build.rs
fn main() -> Result<(), Box<dyn std::error::Error>> {
    // tonic-build drives prost under the hood, generating both the
    // message types and the client/server stubs.
    tonic_build::configure()
        .compile_protos(
            &["proto/es_policy_grpc/service/v1/service.proto"],
            &["proto"],
        )?;
    Ok(())
}
```
note:
First, let's look at how we generate code from our protobuf definitions.
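A build script like this assumes roughly these crates in `Cargo.toml` (version numbers illustrative):

```toml
[dependencies]
tonic = "0.12"
prost = "0.13"

[build-dependencies]
tonic-build = "0.12"
```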
---
### Expose the generated code as a library
```rust
// lib.rs
pub mod policy_service {
    pub mod v1 {
        include!(concat!(env!("OUT_DIR"), "/es_policy_grpc.service.v1.rs"));
    }
}
```
note:
We need to expose the generated code through our lib.rs
---
### Auto generated services
```rust
pub trait PolicyManagementService {
    async fn withdraw_policy(
        &self,
        request: Request<WithdrawPolicyRequest>,
    ) -> Result<Response<WithdrawPolicyResponse>, Status>;
    // ...
}
```
note:
We get a trait generated from the Protobuf Service definition
---
### Building a server
```rust
// main.rs
let server =
    // gRPC server implemented on top of HTTP/2
    Server::builder()
        .add_service(
            // Policy Management server stub
            PolicyManagementServiceServer::new(
                // Implementation of the service
                PolicyManagementServiceImpl::new(application),
            ),
        );

// At the end of the day, the server listens on a TCP port
// like any other HTTP/2 server.
let listener = TcpListener::bind(("0.0.0.0", grpc_port)).await?;
server
    // TcpListenerStream comes from tokio_stream::wrappers
    .serve_with_incoming(TcpListenerStream::new(listener))
    .await?;
```
note:
Simple build of a Tonic Server. We will dive into how to add middleware later.
Highlight the fact that at the end of the day the gRPC server will be listening to a TCP port like any other HTTP2 server.
---
### Building a client
```rust
let mut client =
    // Auto-generated client stub
    PolicyManagementServiceClient::connect("http://[::1]:50051").await?;

let mut request = tonic::Request::new(GenerateContractRequest {
    // ..
});

let token: MetadataValue<_> = "Bearer some-auth-token".parse()?;
request.metadata_mut().insert("authorization", token);

let _response = client.generate_contract(request).await?;
```
note:
What if we wanted to add those headers for every request? Now we talk about interceptors
---
### Interceptors
Interceptors are similar to middleware but with less flexibility.
They allow you to:
- Add/remove/check items in the metadata of each request.
- Cancel a request with a `Status`.
---
### Interceptors in practice
```rust
fn check_auth(req: Request<()>) -> Result<Request<()>, Status> {
    match req.metadata().get("authorization") {
        Some(t) if is_valid(t) => Ok(req),
        _ => Err(Status::unauthenticated("No valid auth token")),
    }
}

let svc = PolicyManagementServiceServer::with_interceptor(
    PolicyManagementServiceImpl::new(application),
    check_auth,
);
```
---
### Health checking gRPC services
Tonic provides a health check service implementing a standard gRPC health checking protocol.
https://github.com/grpc/grpc/blob/master/doc/health-checking.md
note:
A gRPC service is used as the health-checking mechanism.
Since it is a gRPC service itself, a health check has the same format as a normal RPC.
It has rich semantics such as per-service health status.
The server has full control over the access of the health checking service.
---
### Health service definition
```protobuf
service Health {
  rpc Check(HealthCheckRequest) returns (HealthCheckResponse);
  rpc Watch(HealthCheckRequest) returns (stream HealthCheckResponse);
}
```
This definition comes from the official gRPC docs; each language runtime may or may not implement it.
https://github.com/grpc/grpc/blob/master/doc/health-checking.md
---
### Enabling the health service
```rust
let (health_reporter, health_service) = health_reporter();
health_reporter
    .set_serving::<PolicyManagementServiceServer<PolicyManagementServiceImpl>>()
    .await;

Server::builder()
    // Add other layers
    .layer(..)
    .add_service(health_service)
    .serve(addr)
    .await?;
```
note:
Make it clear that we are using the `tonic-health` crate which doesn't come by default with `tonic`.
---
**What about more complex middleware? What if we need to also intercept responses?**
Let's dive into Tower
---
# Tower
note:
Tower is a library of modular and reusable components for building robust networking clients and servers.
Tonic is built on top of Tower
Its core abstraction is the Service, which we see in the next slide.
It already exposes a set of basic reusable services that solve common networking patterns such as timeouts and rate limiting.
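To convey the composition idea, here is a deliberately simplified, synchronous version of a Service-style trait (the real Tower trait is async and Future-based; names here are illustrative):

```rust
// A simplified, synchronous stand-in for Tower's Service trait.
trait Service<Request> {
    type Response;
    fn call(&mut self, req: Request) -> Self::Response;
}

// A leaf service: echoes the request back.
struct Echo;
impl Service<String> for Echo {
    type Response = String;
    fn call(&mut self, req: String) -> String {
        req
    }
}

// Middleware is just a Service wrapping another Service.
struct Shout<S>(S);
impl<S: Service<String, Response = String>> Service<String> for Shout<S> {
    type Response = String;
    fn call(&mut self, req: String) -> String {
        // Delegate to the inner service, then transform the response.
        self.0.call(req).to_uppercase()
    }
}

fn main() {
    let mut svc = Shout(Echo);
    assert_eq!(svc.call("hello".into()), "HELLO");
}
```

Tower's real trait adds `poll_ready` and an associated `Future`, but the layering pattern is the same: middleware and leaf services share one interface, so they compose freely.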
---
### Tower service
```rust
pub trait Service<Request> {
    type Response;
    type Error;
    type Future: Future