Jon Christensen and Chris Hickman of Kelsus discuss service-to-service authentication for microservice APIs.
Some of the highlights of the show include:
- Who’s calling? Authentication is fundamental to designing APIs
- Two types of authentication:
- User authentication
- Service-to-service (microservice) authentication
- Service Mesh, Istio, SPIFFE: Give secure identity to components of distributed system
- Pros and cons of suitable and simple options, including signed JSON Web Tokens (JWTs) and X.509 certificates/API keys
- JWT components: header, payload (claims), and signature
- Can you keep a shared secret? Protect password known by recipient and caller
- Implementation Consideration: Don’t lose original user in chain of service calls and dependencies
Links and Resources
Rich: In Episode 63 of Mobycast, Jon and Chris discuss service-to-service authentication for microservice APIs. Welcome to Mobycast, a weekly conversation about cloud-native development, AWS, and building distributed systems. Let’s jump right in.
Jon: Welcome, Chris. It’s another episode of Mobycast.
Chris: Hey Jon, good to be back.
Jon: Yeah, great to have you. We’re missing Rich today because we’re doing this on a Friday instead of a Thursday and Rich is busy today, so we’ll miss him. Today, we’re going to talk about something that I’m pretty excited about because it’s not something that I know a lot about. It’s one of the things that I think makes working with microservices difficult. I think it’s the kind of thing that gets a lot of Google searches because people are like, “What do I do about this?” The thing is authentication for microservices.
It’s just that, “I have to do this. I have to check this box. How do I deal with it? What’s the way for me to not write any code and check this box?” I think that’s the way a lot of people approach this, especially at the beginning of a project. We’re going to get into what are the ways of doing it and what are some good practices. I hope to learn a lot through this. Because it’s such a big topic and I’m hoping to fit it into one episode, we’re not going to find out what you’ve been up to this week. It’s going to be a mystery, but let’s just get started, Chris. Tell us what we’re going to talk about.
Chris: We’ll see if we can stick to that one episode. I kind of suspect this is going to turn into a two-parter just because there’s a lot here. I think even though authentication feels like security, which kind of feels like a yawner, it’s actually going to be pretty interesting.
Jon: Yeah. This is different than security to me. People aren’t like, “How do I secure my microservices?” They’re like, “Crap, I know I have to do this, so let me Google how to do this.” Everybody’s trying to figure this out, so I think it’s different.
Chris: Yeah, I think this is more like just API design too, right? You asked for […] set this up. We build microservices, we do APIs, whether they be RESTful, GraphQL, or RPC-based or whatever it may be. But you’re going to have endpoints; maybe you do have some that anyone can hit. You don’t need any kind of identity for them, so they’re not authenticated. But by and large, you’re probably building APIs that should be authenticated, even if for no other reason than you want to do things like rate limiting and quotas, and just keeping track of who’s calling what. Chances are most of your APIs are going to require some sort of authentication. You need some identity associated with that. It’s fundamental to the APIs that you’re designing.
Jon: Even if there are dependencies there, like one microservice calling another one, calling another one, calling another one, they can preserve that user context from call to call to call, as long as they are all synchronous. Right?
Chris: Oh, man. That’s actually another big can of worms. It will be interesting. Maybe we can circle back on that.
Chris: Do you actually pass along the context of who called you as you have a chain of microservice calls? You can do that, and there are pros and cons to it, versus do you not pass that context down and instead, does it now become a service-to-service call? If we’ve got the time, we can circle back and talk about it. That’s one of those implementation things that gets really hairy quick. It actually gets really complicated. I’m glad you brought that up, but maybe we’ll circle back on that one.
Broadly, there are two types of authentication that we have with our microservices and our APIs. We just talked about user authentication. That’s when you actually have the identity of an actual end user. The identity mechanism is coming from some well-known system. There’s lots of plumbing out there for doing it. You can use an identity-as-a-service provider, things like Auth0, Okta, OneLogin, even AWS Cognito. These are all existing, off-the-shelf systems and frameworks that you can use to implement your user profile, your user identity store type thing. You can use things like OAuth or SAML (Security Assertion Markup Language) systems to do identity and things like single sign-on.
At the end of the day, you have these end users, they’re registered with your system somehow, they have accounts, and at the end of the day, they’re providing credentials, let’s say, user ID, password. In exchange for that, we now have a well-known identity and that context is what gets passed to microservices. That’s user authentication, that’s the common path, that’s the well-known path. We’re not going to spend much more time talking about that because it’s a pretty well-known, easy-to-solve problem.
What we really want to focus on in this episode is, what happens when the callers are not actually users but are other microservices? Let’s call it service-to-service authentication. This brings up an interesting problem: how do we give these callers identity? This is a common […]. Once you start building services, and definitely once you have more than a handful, you run into this issue where, as you do your refactoring and your architecture and design, you’re going to have core services that provide services to other microservices. So now the service becomes the client of this other service.
You can think of it maybe as a hub-and-spoke architecture or, again, just as a natural progression of your design as you have more and more services. This is going to be more and more of a common occurrence for you, where other services need common functionality and then they need to make API calls to these other services. How do we handle identity there? That’s what we want to focus on in this episode.
Jon: Can you give us, just off the top of your head, an example of a service that we can understand, that would need to call another service? Just some real-world example?
Chris: Yeah. A real one would be configuration state. Imagine you want to have a way of handling configuration in a real-time way across your applications. It could be just whatever setting it may be. A real-world example is we have an application that allows the export of data in CSV format. For various reasons, we need to be able to limit the scope of those exports, to limit the amount of data that’s being exported. We have some configuration settings that say, “Hey, let’s limit it to 3,000 rows of data.” That is a configuration setting. Typically, you would do something like adding it in a config file in your service. Maybe you do this as an environment variable. There’s a lot of techniques. You might want to do it in maybe a database or something.
Jon: […] it.
Chris: Yeah, and then there’s that too. The point being, for the most part, that config is going to be something where if you want to change it and say, “Oh, instead of being 3,000, we want to bump it up now to 5,000,” that may be a code change and a redeploy. It’d be really nice to have more of an admin UI where some administrator could go in and just change that. When that setting is changed, push the changes out to anyone that is interested in it. This way, in real time, it can get the config update and that’s now live.
You might have a microservice responsible for hosting that distributed configuration, for allowing other folks to register with it and say, “Give me my configuration,” and also to register for callbacks when changes happen. That distributed configuration service would be one of these hub microservices that other microservices are going to be callers of.
Chris: Again, we want to lock that down, have authenticated APIs, know who’s calling in. That would be one such example.
Jon: Okay, cool.
Chris: This is the crux of the problem: how do we give these callers identity when it’s another microservice calling? This is where it’s like the Wild, Wild West. This problem has kind of been around for a while, especially once we switched from monoliths to the microservice buzzword. Basically, we’re refactoring our services just as we would […] code.
Jon: I think what characterizes this problem too is sort of an attitude from about 10 years ago of, “Oh, it’s all in our own data centers. It’s all safe. Services can call each other. Ehh, maybe we’ll throw in an API key.” That’s how I think things got dealt with for a long time.
Chris: I actually think that’s how they still get dealt with too, in the majority of the cases. That’s a great point. That’s definitely one of the various possible ways that you could go about doing this. It’s probably one of the more common situations: just, “We’re not going to do authentication for service-to-service calls. Maybe we’ll lock it down from a network perspective and handle it that way.” We’re not dealing with identity; we’re really just dealing with security, in that sense. We don’t necessarily know who’s calling us, but we know that whoever is calling us is from the allowed space.
Jon: You have the key to that building.
Chris: Yes, yes. That’s absolutely one of the possible things that you can do and what people have been doing. Other possibilities are now, with things like the service mesh becoming more popular — we talked about this in previous episodes — that’s providing this functionality. In particular, we talked about Istio and one of its components, Citadel, that deals with security and identity. It uses X.509 certificates to basically give identity to each one of the components in that system. It’s actually using something called SPIFFE.
SPIFFE is a new standard. It’s still going through that process of being defined, but it’s good for folks that are in the space of like, “How do you do secure identity for the components of a distributed system?” SPIFFE stands for Secure Production Identity Framework For Everyone, so this is absolutely being built for this exact problem.
Jon: I want to characterize the problem a little bit more. I had just made a joke of, “You have the keys to the building.” I think that is a good analogy because now the building is the cloud, and it’s multiple regions, and it’s multiple availability zones per region, maybe different networks that are peered together. “Are you sure that your network engineer has tied everything down so nobody can get in? Are you really sure?” “Now, I’m not sure anymore. There are so many firewalls running, so many things that can potentially get left open. I better make sure that everything that I’m running protects itself.”
Chris: Absolutely. Again, there are two other pieces to this. One, is the security aspect of it, making sure that who’s calling me is allowed to. But then you also have the identity part like, “Is this service A or is this service B? Or is it just some anonymous caller that I’m pretty sure is allowed to call me that has the keys to the building?” There are the two pieces to it. Securing your cloud is not an easy job. Having that approach of just saying, “Okay, I have secured my cloud.” Just don’t deal with authentication, that’s kind of fraught with issues.
SPIFFE is this standard that’s been developed by a consortium of folks. It’s comprehensively dealing with these issues and it has its own various components. It has ways of creating an identity namespace and ways of doing identity documents, which at the end of the day are either X.509 certs or signed JWTs. Then it also has an API for how the components of the system can issue and retrieve these identity documents and make these requests.
I guess the main point here is that it’s not lightweight; it’s a pretty complicated, comprehensive system. If you’re a bigger company with lots of folks to work on this stuff, this is something you definitely want to go look at. It’s kind of like what we talked about with the service mesh, right?
Jon: If you have a bigger company and you have a lot of folks to work on this, please do, so that it can be easier for the rest of us and we can start using it.
Chris: Yeah. Indeed. There are other more comprehensive, more complicated ways of dealing with this as well. You can do things like OAuth2 and have specific service accounts, do the exchange back and forth, and have these accounts negotiate for the credentials and use that as your identity. But again, it’s quite a bit of work to go implement your own OAuth2 implementation. And then you have more vendor-specific ways of doing it. This is along the lines of your analogy, “You have the keys to the building.” But if you’re in AWS or Azure, you can use things like IAM roles and security groups to lock down who can call who.
Those kinds of approaches do run into some complexities if you have services that can be called by both users and other microservices. How do you deal with that? If you have authenticated APIs, sometimes you are going to have the user context, and sometimes you’re not, because it’s a microservice. How are you going to deal with that at the IAM security level? It gets to be a bit trickier.
Those are the more comprehensive, complicated ways of doing it. I think, again, for the purpose of this discussion, that’s all we need to say about that, because now I’d like for us to focus more on the simpler approaches that would perhaps be more suitable and practical for most folks out there. Because most of us don’t work at Netflix, we’re not at Google, and we don’t have thousands of microservices. It’s really more like a handful.
Jon: I also want to say that these approaches that we’re about to go over can probably still get past a security audit by a qualified assessor if you’re building a payments application. It’s not that they’re not secure, it’s just that they’re maybe easier to approach from a development perspective.
Chris: Again, great point. That’s absolutely true. What we’ve talked about forms the basis for some of those other systems. Specifically, something like SPIFFE, at the end of the day, is either X.509 certs or signed JWTs, and we’re going to talk about signed JWTs. It’s very similar to what SPIFFE is getting at and basically trying to formalize. We’re just taking a piece of it. We’re not going to make it adhere to any one particular standard and deal with all the various edge cases that they’ve thought about. Instead, we’re just going to keep it at a practical level. But at its core, from an identity and authentication standpoint, it’s the same technique, and it’s just as valid.
Another point of reference, if you will: these signed JWTs are actually one of the things that Google recommends for service-to-service authentication for Google Cloud and OpenAPI, this exact same technique. Even though these approaches are simpler, it doesn’t mean that they’re any less valid or secure or robust.
Jon: Cool. Tell us more.
Chris: There are two. We’ll call these the simpler, more practical approaches for service-to-service authentication. The first one is basically just API keys and the second one we’ll call signed JWTs. First let’s talk about API keys, because I think this is something that’s pretty common out there; it’s pretty well-known.
Jon: They’re in your email, they’re in your GitHub account. They’re everywhere.
Chris: If you’re using something like an email sending service like Mailgun, or MailChimp, or whatnot, chances are you’re registered to make API calls using an API key they gave you. They give you an API key and that ends up becoming the identity; it’s the same for a lot of webhook-type systems and whatnot. In this particular case, an API key really is a password, is what it is. That password identifies you as well. It’s a unique value that’s generated by whatever issued it. When it gets issued, you’re basically saying who you are. By that one unique value, you’re getting both identity and the verification that you are who you are, by virtue of the fact that you know that secret, that password. That’s really what an API key is.
Really easy to implement, because it’s basically just a password. The most common way to do this is just throw it into the HTTP request header, the Authorization header, and put your API key in there. At its core, that’s not secure because it’s just base64 encoded, but you send it over TLS and now it’s encrypted. It ends up being a pretty decent way of handling the identity and authentication issue; as long as you’re going over TLS, it’s secure. It works. It’s easy. Again, this is pretty common, pretty popular, you’ll see it out there. It’s really easy to roll your own as well.
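To make that concrete, here’s a minimal sketch in Python of passing an API key in the Authorization header, using only the standard library. The key and URL are made-up placeholders; in a real system the key would come from a secrets store, not source code.

```python
import urllib.request

# Hypothetical API key and endpoint, for illustration only.
API_KEY = "sk_live_example_0123456789"
URL = "https://api.example.com/v1/export"

# The API key rides in the Authorization header. The header itself is plain
# text, so TLS (the https scheme) is what keeps it confidential in transit.
req = urllib.request.Request(URL, headers={"Authorization": f"Bearer {API_KEY}"})

# urllib.request.urlopen(req) would actually send the call; the receiving
# service looks up the presented key to establish identity.
print(req.get_header("Authorization"))
```

The receiving side does the mirror image: it extracts the header, looks the key up in its list of issued keys, and rejects the request with a 401 if there’s no match.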
But there are some downsides to this. One is, you now have to manage a list of these keys. You have to decide what the level of granularity for these keys is, and this is for you, the implementer, to decide for your system: do you have one API key for all your services, do you do it on a per-service basis, do you issue them to individual callers? It really comes down to what level of granularity you want for identity. Also, what is the blast radius from a security standpoint if a key gets compromised? These are the things you have to figure out. It adds complexity for how many keys you have to manage. It makes it much more difficult to do things like rotation of credentials the more keys you have, and just how are you going to do this given the […].
Jon: They’re probably hardcoded into people’s code.
Chris: Yeah. That’s another […], another con with these API keys: at the end of the day, they are something the developers need to know, so you need to pass them around like, “Hey, what’s the API key for hitting the service on staging?” Now, how are you going to share that? Do you do it in Slack? Is it a DM? Is it chat? Is it email? You should use something like LastPass, some secure password credential sharing service. Who knows, maybe someone accidentally commits it to GitHub and then poof, now you’re hosed. Now it’s time to do rotation.
Those are definitely some of the cons with API keys, and one of the reasons why, in general, we don’t use that method. It’s simple, you can make it work, just know that there’s a longer list of disadvantages or catches that you’ll have to deal with.
The second approach that I wanted to talk about is signed JWTs. This is a pattern we have used here at Kelsus and it’s worked really well for us. It’s also been the most similar to keeping parity with the other form of authentication, user authentication. At the end of the day, the user authentication ends up looking like a JWT with identity in it: what’s their account name, maybe some information about the user like first and last name, email address, that kind of stuff.
We can do the same thing with JWT for our service authentication. We can get things like what’s the name of the calling service and information about it. Perhaps even things like what roles it has and levels of access or whatnot — some additional information about the context of the call. Having a JWT gives us that flexibility. And we can actually go build a separate microservice for handling some of those things for issuing these JWTs and be responsible for doing things like, “Okay, what are the roles and what is the level of access?” And not just authentication but get into things like authorization. So, pretty flexible but also you can keep it really simple as well.
Jon: I think maybe some people listening might not know what a JWT is, JSON Web Token. Can you tell us what that even is?
Chris: Yeah. It stands for JSON Web Token and really, all it is, is a JSON document. It’s basically composed of three components. It has a header which indicates, one, that it’s a JWT. It specifically says, “This is what I am, I’m a JWT.” It also says what the cryptographic algorithm for signing it is, so that’s in the header. The payload is basically where the claims go. Claims are basically the identity information. There are some very well-known claims; most of them are three-letter abbreviations. But at the end of the day, they indicate who the caller is and maybe who they’re trying to talk to. You can have the standard claims in there, the standard properties, or you can extend it on your own and add the things that you want to put in there.
Jon: Are there, like, three-letter claims? I can’t even imagine what those would be. Can you think of any off the top of your head?
Chris: Some of the common ones are EXP, which stands for expiration. It says, “This is the TTL for this JWT.” Another one is AUD, which stands for audience. That is, what’s the name of the thing that I want to talk to. SUB is short for subject. We use that for the name of the service that’s making the call. It’s the client, if you will.
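As a concrete illustration of those claims, here’s what a minimal service-to-service payload might look like in Python; the service names are made up for the example.

```python
import json
import time

now = int(time.time())

# A minimal JWT claims payload using the registered claims discussed above.
claims = {
    "sub": "report-service",   # subject: the service making the call
    "aud": "config-service",   # audience: the service being called
    "iat": now,                # issued-at: when this token was created
    "exp": now + 300,          # expiration: token is good for 5 minutes
}
print(json.dumps(claims, indent=2))
```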
Jon: It’s the service that’s making the call that initially constructs this JWT. It’s going to put one together and send it to the service that it’s calling. The service that’s being called is going to evaluate it and make sure that it looks right.
Chris: The simplest way of doing it is just two services talking to each other. It’s the initiating service — the calling service — that needs to put together its identity document and that’s that JWT. That JWT gets created and that gets sent with its request. Then on the receiving side, it can then look at that JWT and decide whether or not this is valid and to allow it.
We’ve talked about the header part, we’ve talked about the payload part with the claims, and then the last part of it is the signature. This is a calculation: based on the cryptographic method used for signing this, using a secret, here is the signature. This is where the authentication part comes in, to verify the identity and the integrity of the claims within it by having that signature. Of course, the key to the JWT is being able to basically run the same algorithm on both sides. Both the initiator and the receiver run the same algorithm using the same secret, basically a password. If they come up with the same result and that signature matches, then they know, “Hey, we’re allowed to talk to each other. I know that whoever sent me this is who they are by virtue of the fact that…”
Jon: No. Wrong. “I know whoever sent this has the password.” It could be […] something else […]…
Jon: …but you don’t […] they are who they are.
Chris: Sure. Yes, they have the password, which is as good as we can do here. That’s kind of like as good as we can do.
Jon: I know we’re […] right?
Chris: Yeah. Absolutely.
Jon: Cool. Actually, personally, I haven’t implemented JWT before, so that was a nice little lesson for me. Thank you.
Chris: We’ve kind of covered the bulk here of what this approach is. From a caller’s standpoint, when they want to make a call to another microservice expecting authentication, it’s their responsibility to construct one of these JWTs, one of these identity documents. The really key part of that is that they need to be able to cryptographically sign it, so they have to use this shared secret. It’s got to be a secret that’s known by both the recipient as well as the caller.
Again, this is the password, and this is something that you need to protect and limit the scope of who can use it. This is where you can leverage the various ways that you can protect things throughout the system. You can use a secrets manager, something like Vault from HashiCorp or Secrets Manager from AWS. You can use things like S3 buckets and policies and lock that down. You might say, “This service is only allowed to be called by these services.” Only allow those three other services to access that S3 key.
Jon: As you’re explaining this and as I’m learning this, I’m having these aha moments about what JWT provides. It’s so cool. If you were to just have, say, a username or ID, then it would be the responsibility of the service being called to keep a record of who’s who. “Oh, ID 14656. That’s the configuration client that we had set up before. It’s stored in this database that I keep, because I’m a service and I have to know who everyone is.” Whereas with JWT, it seems like you can get new callers without having to maintain who they are, as long as they meet your specification. You have to say, “You’ve got to tell me your name.” Maybe you can even have it tell you other things, like where it is: “I’m in AWS region us-east-1. This is my availability zone.”
You could do all kinds of cool things. There’s additional information you could put into that JWT, and then all of that information could be loggable. If it’s not there, you could say, “Well, you gave me the right key, everything was signed great, but you didn’t include the information I wanted in the JWT. You’re denied. Sorry.” It’s pretty cool that you can do that.
Chris: Yes. Very, very flexible. That was one of the points in the cons with API keys: you have to maintain that list of keys, and thus the keys are both the identity and the authentication. That means you probably have some database table that’s keeping track of this particular API key: “This is the description of who that user or that caller is.” You have to have TTLs on it, all the information that goes along with that. Versus with the JWT approach, you’re basically saying, “That information is the responsibility of the caller.” But they can actually put whatever they want inside there. They can say they’re whoever it is they want to be, so it’s up to you when you’re designing this and coding this; you need to have some conventions and a reasonable way of doing that.
But this particular approach is not saying one way or the other. Really, what it’s guaranteeing, through that shared secret, is that both parties have that same shared secret, that they have access to it. Therefore, they’re allowed to talk to each other, and therefore it’s authenticated, and it’s passing the identity information in that JWT, and that information came from the caller. It’s completely flexible and extensible.
Like you said, the receiver can be like, “Well, I need this other thing. I’m now implementing RBAC, role-based access control, so I need a list of roles that come through on that as well.” You can have something like another microservice that’s responsible for generating your JWT and determining not only authentication but also authorization. Lots of flexibility there with this kind of approach. The great thing about it is you don’t have to do any re-architecture or redesign. You can extend all this with really no changes to the rest of your system.
You might want to have some standalone microservice that’s responsible for doing your JWT generation, and it’s the one that knows about secrets and keeps track of who can call who. That can all be done without really changing the recipients. The services that are being called don’t care; they’re still just getting the JWT. They don’t care whether it was actually created by the caller or the caller used someone else to generate it.
Jon: If the JWT doesn’t validate, it’s not signed right, or it doesn’t have the contents that you’re expecting, then the service can send back an HTTP 401.
Chris: Absolutely. That’s what these recipients, the services, are doing. They’re getting that JWT; again, it’s coming across in the HTTP request header. You use the Authorization header to pack that JWT into, since it’s essentially the password information, if you will. Again, this is going over TLS, so it’s encrypted. The service is going to extract that JWT from the header, do a decode operation on it using the shared secret, and compare the result to the signature that’s in the JWT. If they match, it says, “Okay, this thing is valid,” and that’s the authentication and identity, and it can now go further.
If that doesn’t work, if it’s not valid, if the signature is incorrect, if the decoding fails, then it can just reject it as a 401, unauthorized. If it does succeed, the service can, if it wants to, go further and say, “Okay. I know who this identity is. Do I want to do further checks?” It can do authorization checks, if you will. It can put in rules like, maybe I won’t accept requests from these two services. If the identity is not that, then I can reject it there. But that’s, again, the extensibility of the system and however you want to do it.
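The create-sign-verify flow described above can be sketched with nothing but the Python standard library. This is an illustration of the mechanics only, not a substitute for a vetted JWT library; the secret value and service names are placeholders.

```python
import base64
import hashlib
import hmac
import json
import time

# Placeholder secret; in practice this comes from a secrets manager.
SECRET = b"shared-secret-from-your-secrets-manager"

def b64url(data: bytes) -> str:
    # JWTs use unpadded, URL-safe base64.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(claims: dict) -> str:
    # Caller side: header.payload, then an HMAC-SHA256 signature over both.
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(claims).encode())
    sig = hmac.new(SECRET, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

def verify_jwt(token: str):
    """Return the claims if the token is valid; None means 'respond 401'."""
    try:
        header_b64, claims_b64, sig_b64 = token.split(".")
    except ValueError:
        return None  # malformed token
    signing_input = (header_b64 + "." + claims_b64).encode()
    expected = hmac.new(SECRET, signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(b64url(expected), sig_b64):
        return None  # signature mismatch: wrong secret or tampered claims
    claims = json.loads(base64.urlsafe_b64decode(claims_b64 + "=" * (-len(claims_b64) % 4)))
    if claims.get("exp", 0) < time.time():
        return None  # expired token
    return claims

# Caller side: build and sign the identity document.
token = sign_jwt({"sub": "report-service", "aud": "config-service",
                  "exp": int(time.time()) + 300})

# Receiver side: verify, then proceed (or reject with a 401).
print(verify_jwt(token))              # the claims dict: request is allowed
print(verify_jwt("not.a.valid.jwt"))  # None: reject with a 401
```

A production service would additionally check that the `aud` claim matches its own name, and would lean on an existing JWT library rather than hand-rolled code like this.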
Jon: Right. It’s not in our outline, so this is maybe putting you on the spot a little bit, but if somebody is listening and they want to implement this and they want to get a bit of a head start, do you have anything that you would recommend that they can look up or look for in order to not reinvent the wheel when it comes to implementing JWTs for microservice authentication?
Chris: Definitely use a JSON library and a JWT library for handling this, for handling the encode and decode stuff. There are tons of libraries out there. Don’t go and try to reinvent the wheel. For almost whatever language you’re on, whether it’s Ruby or Java or Node or .NET, doing the creation of a JWT or the decoding of it shouldn’t be much more than 10 lines of code using one of these libraries.
Go to jwt.io. That is a great website with lots of information. I believe it’s hosted by Auth0, and they may be one of the original backers of the JWT spec. They have their own libraries that are out there, open source and available. But every platform out there has libraries for dealing with JWTs, for signing them, for encoding and decoding them. Definitely use those.
Really, it just boils down to what your mechanism is for distributing the shared secret and making it so both the caller and the receiver know about it. You’re going to roll your own on that. Just whatever you do, make sure you’ve thought about it and you limit the scope of who has access to those shared secrets. Also, make it such that developers don’t need to know the secret; make it programmatic through code. It makes things a little bit more challenging for developers. They can’t just do an ad-hoc cURL statement; they’ll probably have to run a script to generate a valid JWT. But it’s so worth it, because it’s like, “Just let the code make the secure fetch to get the shared secret and go make the JWT,” as opposed to passing around the shared secret so that anyone can go and do it.
Leverage the libraries that are out there for dealing with JWTs, for signing, and for encoding and decoding. Then really think hard about how you’re going to protect your shared secrets.
Jon: For sure. I think that kind of covers what we wanted to talk about in terms of microservice authentication. If you’re done listening, thank you. But there was that one part that I’d like to spend just a few more minutes on, for those people that have time, about if you have a chain of service calls and you’ve got a user context on the first one, and then services are calling each other after that in the chain of dependencies, wouldn’t you just pass that credential along from service to service?
I think I understand now what the answer might be: instead of doing that, we could maintain that context as part of the JWT information we pass from one service to another, instead of reusing the original JWT that had the user’s context in the first place.
Before you answer, I guess I want to preface this by saying: if there’s a deep-down dependent service that’s been called by several other services before it, but it was all initiated by a user, I would assume that in our information-gathering systems we would care about who that original user was. To lose that information would seem to be a bummer. Some database gets updated, and we sure hope we know that it was Charlie that really wanted that database update to happen, even if it was microservice XYZ that ultimately made the call to update the database. Right, Chris? We do want to know that user information even deep down in a chain of services that have dependencies. We want to remember that because it’s important.
Chris: Again, this gets into implementation considerations, and it has to do with your architecture, your design, and just the various microservices and how cohesive they are with […] model. A good example of this would be — and this is a real-world example that I ran into in the past — imagine you have a photo sharing application. Photos, you could say, are a certain type of image. Images, at the end of the day, are just binary files. Let’s just call them blobs.
Think about that system. It’s actually three different services. Maybe I have my frontend application service that does all my application-specific logic around photo sharing and all the things it can do there. Maybe I have a separate microservice that all it knows about is images and the functionality around them: things like image filters, resizing, effects, flip and crop, anything around images. Then maybe I have a third service that’s responsible only for blobs. Its responsibility is making sure that I have the ability to do CRUD on blobs in a very scalable way.
The image service will use the blob service, and the photo sharing service will use the image service. That’s the chain. You definitely have user authentication coming in on the frontend to your photo sharing service. Now it gets tricky: “Do you still pass that same user context down into your image service? Did you design the image service so that it has the same exact user base?”
Jon: It doesn’t matter, yeah.
Chris: Right? It gets even more tricky going down to the blob service. “Was that really designed with that in mind? Does it share the same user base as your photo sharing service?” At some point there’s a breakage where it just doesn’t keep extending. There needs to be a separation. You have to decide, “At what level in my system am I doing that? What’s responsible for doing the authentication and the authorization?” You may decide that’s all done at the frontend, at the photo sharing service, and all the identity, authentication, and authorization questions are handled there. Then, past that point, the authentication is done at the service-to-service level between the photo sharing service and the image service, and between the image service and the blob service. Those services are just saying, “We don’t care about that user’s context, because we don’t even know about that user base.”
Jon: It seems to me the way to break it down, the rule of thumb, is that if your microservice is acting on stuff that might change the state of a particular user that’s using your app, then you would want to have that context. But if you’re a microservice that’s doing what you might call generic work, then who cares? You just do generic work. As long as it knows which other services are asking for that generic work, then you’re good to go.
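[Editor’s note: Jon’s rule of thumb can be sketched as two different JWT claim sets. This is a hypothetical illustration; the service names are taken from Chris’s example, and `iss`, `sub`, and `aud` are standard JWT claim names used here to carry the caller identity, the original user, and the intended recipient.]

```python
# Photo sharing service calling the image service: the operation changes
# state tied to a particular user, so the token carries the original user.
user_scoped_claims = {
    "iss": "photo-sharing-service",  # which service is making the call
    "sub": "user:charlie",           # the original end user who initiated it
    "aud": "image-service",          # intended recipient of this token
}

# Image service calling the blob service: generic work on anonymous blobs,
# so only the calling service's identity matters and no user claim is sent.
generic_claims = {
    "iss": "image-service",
    "aud": "blob-service",
}
```

The downstream blob service never has to know who Charlie is; it only verifies that a trusted service is calling, while the user-facing services keep the user identity in their own tokens for auditing.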
Chris: Yeah. I think it really just boils down to architecture and design, but it’s one of those things where it seems like a natural inclination. It’s like, “Yeah, I’ll just pass the user context down. It’s really easy. The blob service knows that it’s John Smith that made the call.” But when you get to implementation, it’s like, “Wait a minute. How does it know who John Smith is?” It gets pretty complicated pretty quickly.
Jon: Makes sense. I think that’s a good way to round it up.
Chris: I think so. We actually got through a lot more than I thought we did. Although I don’t know how much time we spent. It’s gone by quickly for me.
Jon: People are sitting and waiting to go in for dinner at this point. […] sitting in the driveway listening, I hope.
Chris: Yeah. If only.
Jon: Alright. Thanks a lot, Chris. I’ll talk to you next week.
Chris: Alright. Thanks, Jon. See yah.
Rich: Well, dear listener, you made it to the end. We appreciate your time and invite you to continue the conversation with us online. This episode, along with show notes and other valuable resources, is available at mobycast.fm/63. If you have any questions or additional insights, we encourage you to leave us a comment there. Thank you and we’ll see you again next week.