In Episode 182, Ben and Scott sit down with Nills Franssens, a Senior Cloud Solution Architect at Microsoft, to talk about his new Kubernetes book, “Hands-On Kubernetes on Azure”.

- Welcome to Episode 182 of the Microsoft Cloud IT Pro Podcast, recorded live on June 12th, 2020. This is a show about Microsoft 365 and Azure from the perspective of IT pros and end users, where we discuss a topic or recent news and how it relates to you. In this episode, Ben and Scott hop on a call with Nills, a Senior Cloud Solution Architect at Microsoft and the author of "Hands-On Kubernetes on Azure," which is currently available for free on azure.com. We have another interview episode for everyone today, and I'm excited to go through this one. So we have a guest who I've done some work with in the past, through a couple of Microsoft projects that I've been involved with and some community programs over there, like OpenHack. And yeah, he just recently wrote a book too, which, having participated in and written some technical materials before, I very much feel the pain, and we can commiserate about that if you want to as well. But Nills, why don't you go ahead and introduce yourself here real quick.

- Hi, my name is Nills, I'm a Senior Cloud Solution Architect with Microsoft, based in San Jose, California. And as Scott said, I recently wrote and published a book on running Kubernetes on Azure, which was a very fun experience in the writing itself, outside of the Kubernetes content. My main area of expertise within Azure is everything related to infrastructure, networking, storage and the general automation of the platform. And I'm actually very happy to be here and talk to Ben and Scott 'cause I'm a listener of the podcast as well.

- Alright, well, good to hear it. And did you actually say you had fun writing the book? I think you're the first person I've ever talked to who wrote a technical book and actually said it was fun.

- There definitely were periods where it was not as fun, but I think the overall experience was a good experience. It took a lot of time and a lot of nights and weekends, 'cause the one thing that I didn't realize when I signed up... I knew Microsoft had a moonlighting policy, so I couldn't write anything during business hours, but what I didn't realize was exactly how much time was required to write the book. But if I look back on the process, and fun might not have been the right word, it's actually a really nice process that I went through. And it's fun to actually have the physical book in my hands right now. And I learned a ton just during the process, both technically and about becoming a better writer as well.

- Got it, I can see that. So I've never written a book. I think Scott has helped out on some, but I think, from my perspective, it would be interesting to go through the process. And I feel like you'd learn a lot, 'cause obviously you want the book to be correct and you wanna really cover everything. And I feel like once you actually had that book in your hand, like you said, it would be really rewarding knowing that you went through all of that and wrote it.

- Yes, and I think you're absolutely correct, you learn a ton. And I think when you're writing a book, you actually need to go back and take a beginner's mindset about the technology that you're writing about, 'cause once you've been working with something for a couple of years, certain basic things, you don't even consider them and you don't even think about what's underneath those basics. And then once you actually start writing a book, you need to go back to basics and figure some things out. And what I found important was using the right words and the right terminology to describe certain things. Just as a stupid example, in Kubernetes there's this thing called an ingress that you can use to do some layer-7 load balancing, so basically do routing based on the host name that you send. And there's an ingress and an ingress controller, and I always threw both terms together and didn't even realize what the difference was between an ingress and an ingress controller. I just mixed the words up, and when writing the book I was like, while I'm writing this I actually want to use the correct terminology. And that was some of the things that I researched and learned, because when you write a book, you actually want to be factually correct. You don't just want to be, yeah, it works. You actually want to be correct in what you say.
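
A minimal sketch of the distinction Nills describes, assuming the official Python kubernetes client; the host and service names ("shop.example.com", "shop-frontend") are made-up examples. The Ingress object below is only a set of layer-7 routing rules; it does nothing until an ingress controller (for example the NGINX ingress controller, deployed separately into the cluster) watches for it and programs the actual routing.

```python
from kubernetes import client, config

config.load_kube_config()  # use the current kubectl context

# The Ingress resource: just declarative routing rules, no running software.
ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(name="shop-ingress"),
    spec=client.V1IngressSpec(
        rules=[
            client.V1IngressRule(
                host="shop.example.com",  # host-based routing, as mentioned above
                http=client.V1HTTPIngressRuleValue(
                    paths=[
                        client.V1HTTPIngressPath(
                            path="/",
                            path_type="Prefix",
                            backend=client.V1IngressBackend(
                                service=client.V1IngressServiceBackend(
                                    name="shop-frontend",
                                    port=client.V1ServiceBackendPort(number=80),
                                )
                            ),
                        )
                    ]
                ),
            )
        ]
    ),
)

# The ingress *controller* is a separate deployment that reads objects like this
# one; creating the Ingress alone routes nothing.
client.NetworkingV1Api().create_namespaced_ingress(namespace="default", body=ingress)
```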

- If you're a good author, yes, that's the way to go. So I make that mistake all the time. I tend to use those two synonymously as well. It's not something that I would think about very much along the way, 'cause usually once you're down to the nitty gritty, you're like, what do I actually need to do? I need to set up a deployment for an ingress controller so I can actually get that software, or that construct, whatever that thing is, out there so it can start routing traffic for me.

- One of the things that helped me as well when I was writing this was I had somebody from the publisher, 'cause the book is not self-published, I worked with Packt as the publisher, and I had... I don't know what the actual term for that role is, but I had somebody working with me who reviewed everything that I wrote. And she wasn't a Kubernetes expert, which was actually perfect, because when I wrote something and it didn't make sense to her, she could ask a lot of clarifying questions, which I then had to research myself, 'cause some things just seem so common sense when you've been dealing with something for a couple of years that you don't realize it. Having somebody working with you who actually points out, "Hey, why do you use this term and not that term?" helped me a lot in tuning and sharpening my own knowledge.

- Oh, you should get on board that train.

- I am not a Kubernetes guy at all, well, I probably should be. So does this book take it from, let's say, someone at my level, who literally knows just a little bit about Kubernetes from what Scott and I have talked about on the podcast, to someone at your level, Nills, where you're a complete and total expert on Kubernetes? Or should you have some Kubernetes experience going into this? What is the level of the book, and what is, I guess, the journey or the path that the book will take you through as you work through it or read through it?

- The book itself doesn't require you to have any prior knowledge of Kubernetes. The main focus of the book is, as the title says, hands-on Kubernetes on Azure, so the book is very practical, with a lot of examples in it. And if I think about how we laid it out, there's three sections in the book. One is just the basics, where we cover what's Docker, what's containers, what's this thing called Kubernetes and why do we even need it, up to setting up a cluster on Azure, which are the absolute basics. Then there's a second section, which covers more of the Kubernetes constructs that you need to know, like deploying pods, deploying services, doing some ingress work, how you could potentially secure certain things. So that's section two, which focuses more on Kubernetes itself. It touches on Kubernetes on Azure in a couple of places, for instance with the cluster autoscaler that needs to interact a little with Azure, but I think 90% of that section you could do on an Azure Kubernetes cluster as well as on a Kubernetes cluster anywhere, say you run it on your own laptop, or even on GCP or AWS. I think 90% of that content is pretty neutral. And then there's a third section of the book where we actually cover some more of the in-depth Azure integrations, where we describe how you can integrate with some PaaS services. There's a chapter on Event Hubs, there's a chapter on MySQL databases, and there's another chapter where we actually run Azure Functions with KEDA, which is a pretty new project, on an Azure Kubernetes cluster. You don't have to have any prior knowledge if you wanna start reading the book, and if you have no prior knowledge, this is a good way to get you started, from level 100 to level 300. The book itself doesn't make you an in-depth expert, 'cause there's too much in the Kubernetes ecosystem to fit in one book, I believe. But I think if you have no knowledge of it, it's a good way to get started and to get a good understanding of what it takes to build and run applications on Kubernetes.
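
For the "setting up a cluster on Azure" basics mentioned above, a minimal sketch driving the Azure CLI from Python; the resource group, cluster name and region are made up, and it assumes you've already run `az login`.

```python
import subprocess

def run(cmd):
    # Echo and run one az CLI command, failing loudly if it errors.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# A resource group to hold the cluster.
run(["az", "group", "create", "--name", "rg-aks-demo", "--location", "westus2"])

# A small managed AKS cluster.
run([
    "az", "aks", "create",
    "--resource-group", "rg-aks-demo",
    "--name", "aks-demo",
    "--node-count", "2",
    "--generate-ssh-keys",
])

# Pull the cluster credentials into your local kubeconfig so kubectl works.
run(["az", "aks", "get-credentials", "--resource-group", "rg-aks-demo", "--name", "aks-demo"])
```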

- As IT professionals in the cloud era, sometimes it feels like we don't speak the same language as the rest of the organization. So when stakeholders from finance or other departments start asking about a specific project's or team's Azure costs, they don't always realize how much work is involved in obtaining that information. Sifting through cluttered CSVs and a complex mass of metadata in order to manually create custom views and reports is a real headache. On top of helping you understand and reduce your organization's overall Azure spend, ShareGate Overcast lets you group resources into meaningful cost hubs and map them to real-world business scenarios. This way you can track costs in the way that makes the most sense with your corporate structure, whether it's by product, business unit, team, or otherwise. It's a flexible, intuitive and business-friendly way of tracking Azure infrastructure costs, and it's only available in ShareGate Overcast. Find out more at sharegate.com/ITPro

- Yeah, I think there's a lot that goes into Kubernetes and that ecosystem. And then there's AKS, which is Kubernetes, kind of, sort of, but you've got this entire management plane on top of the regular management plane that's sitting there with Kubernetes itself. Like last week I was doing a bootcamp for partners and was delivering infrastructure sessions on AKS, and we got down into the weeds on something. It was, oh, taints and tolerations, so let's go through and set up some new node pools and talk about ways that we can constrain certain types of compute, through our pod specs, to just those node pools. And here's how you would do it in Kubernetes land, and by the way, over here in AKS it's gotta be a little bit different, because we have to drive through the management plane, through ARM, to set the taint on a node. I can't just go into the cluster and do it myself there. So there's just little one-offs like that. So it's totally worth it to understand Kubernetes, and then once you're into it, you come out the other side and you go, okay, so now how do I operationalize all this and make it work on top of Azure, which is a really interesting and fun exercise. Like you mentioned, maybe talking to a database like MySQL, or you wanna run a Cosmos DB or get out to a Function or things like that. Now you've got compute that has to interact with virtual networks and other parts of the plane to get out and do what it needs to do. And that's where AKS starts to get really kind of fun, 'cause it breaks quickly.
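
A sketch of the two halves Scott describes, assuming the az CLI and the Python kubernetes client: the taint is applied to the AKS node pool through the Azure management plane (ARM), while the toleration and node selector go into the pod spec through the Kubernetes API. The pool name, label, group/cluster names and image are made-up examples.

```python
import subprocess
from kubernetes import client, config

# 1) Azure side: add a tainted, labelled node pool (you can't taint AKS nodes with kubectl).
subprocess.run([
    "az", "aks", "nodepool", "add",
    "--resource-group", "rg-aks-demo",
    "--cluster-name", "aks-demo",
    "--name", "gpupool",
    "--node-count", "1",
    "--node-taints", "sku=gpu:NoSchedule",
    "--labels", "sku=gpu",
], check=True)

# 2) Kubernetes side: a pod that tolerates the taint and selects the labelled pool.
config.load_kube_config()
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gpu-job"),
    spec=client.V1PodSpec(
        containers=[client.V1Container(name="worker", image="myregistry.azurecr.io/worker:latest")],
        node_selector={"sku": "gpu"},
        tolerations=[client.V1Toleration(key="sku", operator="Equal", value="gpu", effect="NoSchedule")],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```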

- If you're not too familiar with Kubernetes and AKS, that's a very common confusion: what do I need to do against the Kubernetes API, meaning what do I do using kubectl, or kube-cuddle, or whatever you want to call it, and when do I call an ARM API versus the Kubernetes API? So when do I use kubectl and when do I use an az aks command? 'Cause certain things, if you think about autoscaling, which is something we describe in the book itself, there's two axes to autoscaling. One is autoscaling your application itself, for which you would use a Horizontal Pod Autoscaler, which you configure in Kubernetes using kubectl. But then if you want to scale your cluster, you actually do an az command, because the cluster autoscaler is something that Azure configures on your behalf. So those little nuances, Scott, you're pretty much correct that there's slight nuances left and right in how you have to do certain things.
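
A sketch of the two autoscaling axes Nills mentions, one against each API. The Horizontal Pod Autoscaler is a Kubernetes object (what you would otherwise create with kubectl), while the cluster autoscaler is switched on through the az CLI because Azure manages it for you. The deployment name, group/cluster names and counts are made-up examples.

```python
import subprocess
from kubernetes import client, config

# Kubernetes API: scale the application itself (pods) based on CPU usage.
config.load_kube_config()
hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="shop-frontend"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="shop-frontend"
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,
    ),
)
client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)

# Azure API: scale the cluster (node count) underneath it via the cluster autoscaler.
subprocess.run([
    "az", "aks", "update",
    "--resource-group", "rg-aks-demo",
    "--name", "aks-demo",
    "--enable-cluster-autoscaler",
    "--min-count", "1",
    "--max-count", "5",
], check=True)
```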

- One of the interesting things about AKS, and I've found this over time working with it, is that it really drives you towards better practices around deployments and automation in general. You end up with a lot of these features, like you mentioned the cluster autoscaler, and sometimes there are features that aren't even going to be enabled in my cluster unless I turn them on the very first time that I spin that cluster up and start to get it ready. So you quite often end up tearing things down, standing other things back up, and it helps to understand even that whole ecosystem of what needs to be there when I start. Like you mentioned ingress controllers: hey, it might actually help me to have an ingress controller that lives in some resource that can sit side by side with my cluster, like an app gateway or something like that, and start to externalize some of that infrastructure and make it a little bit easier for me to do migrations and swings and just all the random stuff that comes in with AKS, 'cause you're like, oh, it's Kubernetes until it's not. And then you start going down the path. I don't know if you've ever run into anybody. Do you get into AKS engine at all in your book? Are you focused mostly on just the regular "vanilla" Kubernetes offering?

- I describe it in like one paragraph, just to explain what AKS engine is, but we don't touch it at all in the book. And based on my experience, and it's pretty strange 'cause I thought that AKS engine was used a lot more, but with the customers that I deal with, either they run AKS itself or they are very brave and they just run their own clusters, at least the customers that I work with. I don't know if you've seen other things, Scott.

- I've seen one or two. I haven't seen anybody who uses AKS engine. I did have some customers in the way back when who used ACS engine, when that used to be a thing, but I think it's less and less of a driving force today. Most people have settled on Moby and Docker as a container runtime, so it's not a big deal to come in and say, okay, AKS, this is what you get out of the box. Unless you're driving into something like, you don't wanna do Ubuntu 16.04 or 18.04 for your Linux nodes, then you've got to go down this other path. And once you start going down the other path, it's like, well, I can use AKS engine, or like you said, I can just spin it up myself. And at that point you might just wanna spin it up yourself, because you probably have the expertise with Kubernetes to make that happen, and you understand how to operate it and keep it healthy.

- And if you don't have the knowledge of how to operate it yourself, you definitely don't wanna use AKS engine, because AKS abstracts so many things for you, but AKS engine, that's not a managed service. You just get a template that will deploy a cluster for you, and once the cluster is there, you're on your own. And if you don't know how to operate it, you're gonna have a bad time, 'cause Kubernetes is very finicky.

- Do you feel overwhelmed by trying to manage your Office 365 environment? Are you facing unexpected issues that disrupt your company's productivity? Intelligink is here to help. Much like you take your car to a mechanic that has specialized knowledge on how to best keep your car running, Intelligink helps you with your Microsoft cloud environment, because that's their expertise. Intelligink keeps up with the latest updates in the Microsoft cloud to help keep your business running smoothly and ahead of the curve. Whether you are a small organization with just a few users or an organization of several thousand employees, they want to partner with you to implement and administer your Microsoft cloud technology. Visit them at intelligink.com/podcast. That's I-N-T-E-L-L-I-G-I-N-K.com/podcast, for more information or to schedule a 30-minute call to get started with them today. Remember, Intelligink focuses on the Microsoft cloud, so you can focus on your business.

- I think it's one of those potential things that you run into, and you might run into it with, you know, certain pieces of software inside of IaaS. Like Ben does a lot with SharePoint, so I'm not gonna be able to run my SharePoint farms inside of VM scale sets, because the farm's not gonna like new servers coming up and coming down and not being domain joined; it's just not gonna be very happy. So you end up managing it in IaaS, you still use availability sets and spin things up manually, and maybe you find other pieces of automation to run. And AKS engine versus AKS is, I think, a little bit of that same argument, where you're going to say, I either wanna take on the management, like I wanna be in infrastructure and that's where I want to be, or I want to be in this other managed service that just gives me that management plane for free. And I think that's gotten a little bit better, particularly for some types of customers, like now that there's the SLA that you can purchase for uptime. There were gaps there that have just been filled over time, and it makes it a lot easier and a lot more consumable, where you don't have to go down that path and say, well, I'm gonna spin everything up myself in ARM templates and hope for the best later.

- To add to what you just said, Scott, I like how the AKS team evolves really quickly. And I also like how they are sharing all the updates that they're doing. Like, they have a GitHub repository.

- They have an awesome roadmap.

- Yes, they have their roadmap on GitHub, but they also have a changelog that's updated weekly, and you can actually see every week which changes were made to the AKS service. And if you just look at the changelog, the minor and major tweaks they do are stunning to see on a week-to-week basis. So if you decide to run your own, either using AKS engine or running your own cluster, all of those little tweaks are things that you will have to engineer and operationalize as well. So using the managed service, for me it's almost a no-brainer.

- I mean certainly, if you're gonna go down that path, I'd be 100% with you on that statement. I think that one last hesitation was maybe some things around the SLA, and that last piece of friction was removed, so it makes it very consumable now. You can step in, you can understand what you're getting. They've definitely ramped up on things like availability, so thinking about maybe even just resiliency: being able to run with availability zones for your nodes now, being able to do multi-zone node pools, the external controllers like app gateways, or integrations with other external services. It's a really compelling story, especially if you're gonna use all those bits and pieces once you get in there. If you're just spinning up, like, I'm gonna play with Kubernetes and I wanna spin up a single-node cluster and see what's going on, it's like, yeah, yeah, sure. But if you're gonna be running it at any kind of scale, you might as well be in that service. You're not paying anything for the management plane. You're only paying for the compute, and you're gonna pay for that anyway if you're running it with AKS engine or IaaS, or however you wanna do it.

- Yeah, I'm with you there. I think one of the things that you mentioned is just an interesting topic of discussion, but where I see customers still struggle today is with multi-region deployments. 'Cause I think the answer from a Kubernetes perspective for multi-region deployments is still: just have your CI/CD pipeline deploy twice, once in each region, and figure out your data strategy. For a lot of companies that are used to technologies like Azure Site Recovery, which can just mirror your VMs to a secondary region where you don't have to do anything except set up ASR, multi-region is still pretty difficult to wrap their head around with Kubernetes. And it's not just Azure Kubernetes, that's Kubernetes in general.

- I would agree with you there, and I think some of it goes back to, for better or worse, I've found that AKS drives you into a model where you end up spinning up a lot of new clusters, regardless of the whole multi-region thing. There's always gonna be a reason to create yet another new cluster: you've got YAML, and you've also got YANC, yet another new cluster. And you're driven into this model where automation is your friend, whether that's simple deployment scripts or richer CI/CD in there. And then that starts to give you some of that flexibility. Like, yes, it's a pain to potentially duplicate your pipelines and ultimately have two deployment targets, but it lets you do things like hot-warm or hot-cold scenarios. You understand the time that it takes to not only spin up a new cluster, but spin up your infrastructure around it. And it lets you potentially use native tooling to help you through some of those migrations along the way. Like if you wanted to use Heptio's Velero to, say, do a backup and restore across clusters, you could totally do that in a multi-region scenario pretty easily. Velero has got a really nice integration with AKS where it can snapshot directly to Azure Blob storage, and then you can just pick up that blob with a SAS token, from Velero in a cluster in the same region or in another region, and go ahead and restore it back and get it to where it needs to be. There's weird nuance with some zones, and if you've got volume claims and things like that, but if you're interested in a next, next, next kind of AKS cluster, or you understand your deployment model, it's a fairly flexible thing to get into. I mean, I would always wish for, yeah, just give me a button that does it for me, like ASR does, but I think there's so many moving pieces with Kubernetes in general that it's a tall ask to get there. It's one thing for your service provider to understand the service plane that they offer to you. It's another thing once you start deploying your pods and running your applications with all your services and everything on top of it.

- I haven't played around with Velero, so I'm actually learning something new here. When you mention it snapshots your AKS cluster, does it make a snapshot of your running configuration, basically everything you have deployed, or does it make a snapshot of data drives or disks that are attached to pods?

- Yes, you can do things like snapshot individual pods or StatefulSets. You can back up... Velero, they call it a backup. So you can back up individual pods, you can back up an entire namespace. So if in that namespace, say, you spun up a couple of pods and you mounted some managed disks into those pods, it'll take snapshots of those managed disks as well and include those as part of your backup. So when you go and restore, into a new cluster or into the same cluster, whatever it happens to be, it will bring those managed disks back for you as well. It's pretty nifty.
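
A sketch of the Velero flow described above, driving the velero CLI from Python. It assumes Velero is already installed in both clusters with the Azure plugin pointed at the same Blob storage backup location; the namespace and backup name are made-up examples.

```python
import subprocess

def velero(*args):
    # Run one velero CLI command against the currently selected kube context.
    subprocess.run(["velero", *args], check=True)

# In the source cluster: back up one namespace, including snapshots of any
# Azure managed disks mounted by its pods.
velero("backup", "create", "shop-backup-001", "--include-namespaces", "shop")

# In the target cluster (same or a different region), pointed at the same
# backup storage location: restore that backup, disks and all.
velero("restore", "create", "--from-backup", "shop-backup-001")
```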

- Yes, I'll have to play around with it some day.

- They made a pretty slick integration for it. There's certainly some infrastructure that comes with it, but I think there's infrastructure that comes with anything: you want to monitor your containers, you wanna back them up, there's more system pods and things that you need to stand up. But it is, for the most part, fairly straightforward. It's like when we do an integration with anything else, where maybe you want your cluster to talk to your ACR. Well, you've got to give it some type of RBAC into ACR so it can go ahead and get at it, let the service principal for the cluster through. Same thing with Velero: you've got a storage account, everything just goes into Blob storage, and then you can either interact with it through service principals, or, you know, if you want, hook up with an access key through Storage Explorer or something like that, and be able to download individual backups or create SAS tokens to them, whatever you need to do, that's all possible as well.
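
For the "create SAS tokens to them" part, a minimal sketch assuming the azure-storage-blob SDK, handing out time-limited, read-only access to one backup blob. The account, container and blob names are made up, and the account key would come from your own configuration rather than being hard-coded.

```python
from datetime import datetime, timedelta
from azure.storage.blob import generate_blob_sas, BlobSasPermissions

account_name = "velerobackups"          # made-up storage account
container_name = "backups"              # made-up container
blob_name = "shop-backup-001.tar.gz"    # made-up backup blob
account_key = "<storage-account-key>"   # supply from your own config/secret store

# Read-only SAS token, valid for one hour.
sas = generate_blob_sas(
    account_name=account_name,
    container_name=container_name,
    blob_name=blob_name,
    account_key=account_key,
    permission=BlobSasPermissions(read=True),
    expiry=datetime.utcnow() + timedelta(hours=1),
)

# Anyone with this URL can download that single backup until the token expires.
url = f"https://{account_name}.blob.core.windows.net/{container_name}/{blob_name}?{sas}"
print(url)
```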

- Did they just catch you selling Blob storage on this podcast? Scott, what's up with that?

- No, I'm not allowed to do that.

- No, I was just kidding. So I think one of the interesting pieces that you highlighted there was all of the additional infrastructure that you still need when you're running in Kubernetes, 'cause Kubernetes itself is just a simple, and I don't want to use the word simple, but it's just an orchestrator. There's so much more that you need to run, let's say, an application at scale. You need to have your monitoring, your logging, your security, and all those things require additional infrastructure. That's why I love Kubernetes, and I think it's a really neat technology to work with, but I don't think it's a fit for every application that should be developed and deployed. And you and I, Scott, we actually have some experience working together on moving something from Kubernetes into... Azure Web Apps for Containers, which for some applications might actually be a better fit than a Kubernetes cluster.

- Yes, so that's something we can't talk about now.

- Go ahead... Actually, I think I thought of a question I can ask you guys, not being a Kubernetes person.

- Outlook add-ins are a great way to improve productivity and save time in the workplace, and Sperry Software has all the add-ins you'll ever need. The Save As PDF add-in is a best seller and is great for project backups, legal discovery, and more. This add-in saves the email and attachments as PDF files. It's easy to download, easy to install, and Sperry Software's unparalleled customer service is always ready to help. Download a free trial at sperrysoftware.com, S-P-E-R-R-Y-S-O-F-T-W-A-R-E.com, and see for yourself how great Save As PDF is. Listeners can get 20% off their order today by entering the code CLOUDIT, that's CLOUDIT, C-L-O-U-D-I-T, all one word, at checkout. Sperry Software: work in email, not on email.

- So as you talk about that, what types of applications are actually suitable for AKS? 'Cause kind of like you said, it's not necessarily one size fits all, just like a SharePoint list should never be used as a database. What scenarios or what applications does AKS tend to be a good fit for?

- I'll take a swag at it and then I'll let Scott chime in as well. I think AKS and Kubernetes, just as a generalization, are a really good fit for applications that have been designed with a microservices mindset from the start. You can run traditional monolithic applications in Kubernetes as well, but it's less optimized for it. And another workload that fits really neatly in Kubernetes is everything that's stateless, if you have any stateless applications or any stateless APIs that you need to run at scale. That works really well in Kubernetes. State and storage are getting better, and with every release of Kubernetes itself the support for state and the support for storage gets better, but it's still a hassle to manage a stateful application on Kubernetes. So I typically recommend to customers, for anything that has to do with state, just use a managed service: if you need a MySQL database, don't run MySQL on your cluster, just run the Azure service for MySQL and that'll take care of a lot of the pain for you. Scott, what do you think are good prime applications for Kubernetes?

- I think that's a great approach to it. I don't know that it works for too many existing things. Even if you have something that's a well-architected set of microservices, to take those and translate them into Kubernetes means learning potentially this whole new network plane. You know, how does DNS work in Kubernetes? How does service discovery work? What are my considerations around securing these endpoints, which you probably already have a good handle on over in whatever your world is today. But if you're looking at something that's greenfield, it's an awesome option over there. I love the idea of put your stateless things in Kubernetes, and anything that's stateful, try and externalize it as much as you can from the cluster, like databases and things like that. It's really easy to say, you know, I have MongoDB today and Mongo is running in a container already, so I'm just gonna bring that over as-is to a Kubernetes cluster. Well, now you've got to work through that problem of yet another new cluster and backups and restores. It's just easier if you can externalize all that and have your API tier and your web tier and things like that potentially sitting in your cluster, ready to go. And then you can potentially get to that polyglot model, where you're really taking advantage of the cloud for what it is and running the right tool for the right job, so you're gonna be able to seek out your best efficiencies and really move things forward as you do those deployments. I'm not a fan of Kubernetes for the sake of Kubernetes. I think there's too much else out there that does a really good job. So Azure Web Apps for Containers is an interesting thing. It's not the end-all be-all, but it's interesting for some use cases. I'm a huge fan of Azure Container Instances. I know that I use those all over the place for infrastructure deployments, even for some of these smaller microservice applications. Most people think of ACI as a single-instance thing, but it's really a container group, so I can put a bunch of containers in a little group together. And as long as they meet the compute needs of one container instance, which are pretty darn high, I can put them all together and still make them do what they need to do without having the overhead of an entire cluster weighing me down.
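
A sketch of the "container group" idea Scott mentions, assuming the azure-identity and azure-mgmt-containerinstance Python packages: several containers scheduled together on ACI with no cluster involved. The subscription ID, resource group, names, images, region and sizes are all made-up examples.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerinstance import ContainerInstanceManagementClient
from azure.mgmt.containerinstance.models import (
    Container, ContainerGroup, ResourceRequests, ResourceRequirements,
)

aci = ContainerInstanceManagementClient(DefaultAzureCredential(), "<subscription-id>")

def container(name, image, cpu, memory_gb):
    # One container definition with its share of the group's resources.
    return Container(
        name=name,
        image=image,
        resources=ResourceRequirements(requests=ResourceRequests(cpu=cpu, memory_in_gb=memory_gb)),
    )

# Two containers deployed together as a single group, sharing a lifecycle and
# local network, without standing up a Kubernetes cluster.
group = ContainerGroup(
    location="westus2",
    os_type="Linux",
    containers=[
        container("api", "myregistry.azurecr.io/api:latest", cpu=1.0, memory_gb=1.5),
        container("worker", "myregistry.azurecr.io/worker:latest", cpu=1.0, memory_gb=1.5),
    ],
)

aci.container_groups.begin_create_or_update("rg-aci-demo", "demo-group", group)
```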

- I have a very unpopular opinion here, but there's nothing wrong with running a really nice integrated and automated VM deployment as well.

- Absolutely not, like if that's the thing that works, do what works and what you're gonna be comfortable with and what you're gonna be able to operate. If you don't do Kubernetes today, you're gonna spend so much time learning Kubernetes rather than potentially getting started and being able to actually accomplish what you set out to do, whatever your project was focused on. So you can learn those things on the side and get them going. And maybe over time, like you said, you start out in VMs, and as you refactor that application and start to transform it, maybe parts of it turn into microservices or little task runners or things that are out on the side that are better suited to containers in general. Then you can think about the container ecosystem, and if Kubernetes is the thing, Kubernetes is the thing, but there's tons out there. I've always been really impressed with kind of the breadth and the number of options that you have for running containers in Azure.

- I'll echo what you said: start with VMs and then see where it takes you. I'm actually working with a customer, and I don't wanna spill their information here, but they moved something to Azure about a year ago and they did it the right way; it's fully automated. They had a Terraform deployment to deploy their application, everything was fully automated, and as usage increased they deployed more and more using Terraform. And they're right now at a point where they have a fully automated deployment system, but they're realizing that with VMs, the density that they're able to reach is not as high as they would want it to be. And they're now looking into Kubernetes as a solution to get a higher application density than they're able to reach on virtual machines. But I think they're taking the crawl, walk, run approach to, let's just call it cloud-native applications. They started out with VMs in an automated fashion, that's the crawl, and now they're walking and experimenting with Kubernetes, and in a couple of months they'll be running and having it all in production in Kubernetes at a lower cost. But they didn't just move it in one big bang into Kubernetes, which I think is a really smart approach.

- Certainly sounds like it. And in the majority of cases, you'll probably find that's the way to go. You've got enough to learn just going to Azure or AWS or GKE; adopting any of these platforms is enough to wrap your head around, on top of getting your applications up and running and getting the ROI out of it to have it make sense.

- I agree 100%, there's so much to learn, and maybe it's better to stick with something that you know right now than to go full blast and make everything new. That's not gonna work, or it's gonna slow you down tremendously.

- And I think, as you guys were talking through that, that's something that I feel like we as technologists can have a tendency to do sometimes too: we hear the latest hot new technology, whether it's AKS or some new service in Azure or some new software, and too often, I think, 'cause I do it myself, it's like, oh, we've got to get everybody on this. We gotta skip the crawling and the walking and we jump right to running, because it's something new and it's shiny and we want to get everything on it, rather than taking a methodical approach, figuring out what's actually best for me, what's the right path I can take, and making sure we're choosing the right technology, the right platform, the right services for the job that we need to do.

- I think I can only say yes, and I catch myself doing it every single day as well, just trying to get onto the latest and greatest thing. Like last year at Build when WSL 2 was announced, I was one of the first guys to get into the insider preview fast ring so that I could get my hands on the shiny new objects.

- It's fun though, I mean, for me, I like it. That's part of the fun of being in technology too, right? We always get to play with new stuff.

- I think it's about recognizing that that's the case and that's the mindset. It's always worth taking a step back and taking a deep breath and going, okay, what's the right thing to do here? Do I go home and play with WSL 2 at night? Or is that the thing that we go out to everybody in the world and say, "Hey, let's do WSL 2 now," and run insider rings and things like that? And that's not often the case. It's always nice to step in, especially if you've played around with it and you understand that landscape, right? You bring that expertise to the table and, all right, everybody, let's calm down. Here's some options, let's rationalize them and see if we can't figure out where we're going from here. And quite often you have that crawl, walk, run thing where you can put that on the table. And if somebody wants to run, it might not be in their best interest, but at least you've said your piece.

- Yeah. Kind of like you said even with your book, Nills, at the beginning, when you were going back and making sure you used the right terms and you weren't jumping too far ahead of where people should start or what they should understand. It's good to take that step back every once in a while and just look at the whole picture.

- Yes, definitely, 'cause when I mentioned the WSL 2 thing, that was on my own laptop, and if it broke, I would just do a fresh install. That works for a developer machine; that doesn't work for a business-critical application, so I'm right with you.

- Yeah, all right, well, anything else, any other topics you guys would like to talk about or enlighten us on?

- I think that was a good amount.

- That was, that was a lot.

- It was a fantastic talk. The one thing I'd add, and this is just some shameless self-promotion, but the book is available today. You can get a free copy, and I think the link will be in the show notes, question mark?

- Yep, we'll put it in there, as long as you give us the link.

- Correct, and in the show notes I already have a link to the copy that I think you're gonna talk about, the one on azure.com.

- Yes, so there's a free copy that Microsoft is distributing on azure.com and we'll share the link. But if you prefer a print version, they're available either directly from the publisher on packtpub.com, or you can find a copy on amazon.com as well.

- Excellent, I'll include links to both of those as well. And then people can choose.

- Alright, and we'll put Twitter handles and stuff too. Is that the best place if people have questions and they wanna reach out to you, Twitter?

- Twitter, or LinkedIn, I'm equally active on both. So whatever platform you prefer, you can reach out to me.

- All right, and we'll put both of those in the show notes as well, a link to your LinkedIn profile as well as your profile on Twitter.

- Wonderful, thank you guys for your time, this was fun.

- All right, not a problem, thank you. Well, have a good weekend guys, we'll talk to you later.

- Okay. Thank you, bye.

- If you enjoyed the podcast, go leave us a five star rating on iTunes. It helps to get the word out so more IT pros can learn about Office 365 and Azure. If you have any questions you want us to address on the show or feedback about the show, feel free to reach out via our website, Twitter or Facebook. Thanks again for listening and have a great day.


Sponsors

  • ShareGate – ShareGate’s industry-leading products help IT professionals worldwide migrate their business to Office 365 or SharePoint, automate their Office 365 governance, and understand their Azure usage & costs
  • Sperry Software – Powerful Outlook Add-ins developed to make your email life easy even if you’re too busy to manage your inbox
  • Office365AdminPortal.com – Providing admins the knowledge and tools to run Office 365 successfully
  • Intelligink – We focus on the Microsoft Cloud so you can focus on your business

Show Notes

About Nills

Nills Franssens is a technology enthusiast and a specialist in multiple open source technologies. He is the author of the second edition of the “Hands-on Kubernetes on Azure” book. He has been working with public cloud technologies since 2013.

In his current position as Senior Cloud Solution Architect at Microsoft, he works with Microsoft’s strategic customers on their cloud adoption. Nills’s areas of expertise are Kubernetes, networking, and storage in Azure.

When he’s not working, you can find Nills playing board games with his wife Kelly and friends, or running one of the many trails in San Jose, California.

You can follow Nills on Twitter @NillsF and on LinkedIn.

About the sponsors

Sperry Software, Inc. focuses primarily on Microsoft Outlook and more recently Microsoft Office 365, where a plethora of tools and plugins that work with email have been developed. These tools can be extended for almost any situation where email is involved, including automating workflows (e.g., automatically save emails as PDF or automatically archive emails that are over 30 days old), modifying potentially bad user behaviors (e.g., alert the user to suspected phishing emails or prompt the user if they are going to inadvertently reply to all), and increased email security (e.g., prompt the user with a customizable warning if they are about to send an email outside the organization). Get started today by visiting www.SperrySoftware.com/CloudIT
Every business will eventually have to move to the cloud and adapt to it. That’s a fact. ShareGate helps with that. Our industry-leading products help IT professionals worldwide migrate their business to Office 365 or SharePoint, automate their Office 365 governance, and understand their Azure usage & costs. Visit https://sharegate.com/ to learn more.
Intelligink utilizes their skill and passion for the Microsoft cloud to empower their customers with the freedom to focus on their core business. They partner with them to implement and administer their cloud technology deployments and solutions. Visit Intelligink.com for more info.