In Episode 174, Ben and Scott dive into Azure App Service for Linux and Azure Web App for Containers as a hosting option for microservices and more.

- [Ben] Welcome to episode 174 of the Microsoft Cloud IT Pro Podcast, recorded live on April 16, 2020. This is the show about Microsoft 365 and Azure from the perspective of IT pros and end users, where we discuss a topic or recent news and how it relates to you. In this episode, we talk about Azure App Service for Linux and Azure Web Apps for Containers as a hosting option for microservices.

- [Scott] You've made it to another Friday.

- [Ben] Is that what day it is?

- [Scott] It is. Rebecca Black, do you remember that song "Friday"?

- [Ben] Oh no, please, please

- [Scott] Yes, no.

- [Ben] No, no!!!

- [Scott] Well, hold on. As she taught us, Friday is the day that comes after Thursday, right? Yesterday was Thursday, Thursday. Today it is Friday, Friday. Partying, boom.

- [Ben] But that would assume that I knew that yesterday was Thursday.

- [Scott] Yeah, well, I'm just telling you like, "Hey, Rebecca Black could help you get through COVID-19."

- [Ben] No, it hurts.

- [Scott] Just to throw, Just to throw that out there for you.

- [Ben] Well, we're almost through it, we have our escape plan now right? As of yesterday?

- [Scott] Yeah.

- [Ben] Although there's no timeline on our escape plan, it's just a plan. This is how we're going to escape at some point in time.

- [Scott] Well, it's phased and it's gated. It's very DevOps-y. They've got that going for them.

- [Ben] Yes.

- [Scott] But, you know beaches are reopening today. We got that going for us.

- [Ben] They are. But only for a few hours, right? Cause I saw they're opening today at five, but then it was like 5 to 8 p.m. And then 8 a.m. to 11 a.m.

- [Scott] Yes, yeah.

- [Ben] Which means--

- [Scott] Six hours a day.

- [Ben] They are essentially trying to avoid people going out and hanging out all day because let's face it in Jacksonville nobody goes to the beach from eight till eleven, unless you're gonna go for a walk or run or something like that. And same thing from five to eight. It's hey, you can go take a walk, you can go take a run, you can go exercise, but you're not gonna go lay out and party at the beach all day.

- [Scott] Yeah, no, it's one of those I go kinda like two ways about it. Cause you know people are gonna abuse it.

- [Ben] Right.

- [Scott] No matter what. There's gonna be lawn chairs and things out there now people have to patrol it and all those kinds of things but I am genuinely looking forward to just going back to the beach and being able to like stick my feet in the ocean again. Like that's one of the advantages of living here.

- [Ben] Right.

- [Scott] And being close to all of that, so... Yeah, I'm sad I can't take a lawn chair with me, but I'm not gonna be one of those people. But I am totally gonna go stick my feet in the ocean.

- [Ben] Oh did they say no lawn chairs too?

- [Scott] Yeah, they don't want you, like you said, congregating or any kind of chance of that going on.

- [Ben] I missed that part. It was interesting though because of the--

- [Scott] It was in like the Sheriff's webcast about it.

- [Ben] Okay.

- [Scott] So it's not in the official thing but they did call it out in the Sheriff's one so I think they're gonna be kinda going by and talking to people.

- [Ben] So did you read the whole article? This made me laugh, primarily because it's so interesting to me. I have family in other states and everybody has their different definition of essential activities when they put these stay at home orders in place. It's like, hey, you can only go on the beach for essential activities. And in Florida, based on the governor's executive order, essential activities include participating in recreational activities consistent with social distancing guidelines such as walking, biking, hiking, fishing, running, swimming, taking care of pets and surfing.

- [Ben] So if you're in Florida, surfing, swimming, I mean I get some of it's exercise based but then you look at places like Michigan where they're not even allowed to do any residential or commercial construction projects. Those are considered nonessential. In Florida surfing is essential.

- [Scott] As it should be. I have a weather reporting app. So I don't surf but I go out and go paddle boarding, and the weather reporting app follows all the cameras up and down the beaches and the Intracoastal here. Just so, sometimes it's not even about, like, wind speed or tide or things like that, it's really just about how calm it is out there, cause there's some parts of the Intracoastal and things that are--

- [Ben] Do you ever go paddle boarding in like six foot waves?

- [Scott] You know, sometimes I would, but my paddle board is an inflatable, just so it's a little bit more portable for me to get around. And it needs to be like really calm and really flat in the ocean, just to kinda keep the stability needed. It's not a paddle board that you would necessarily surf on.

- [Ben] Got it.

- [Scott] You know if I went out and bought like an eleven foot board that was a hard deck, then I could do some different things, but, yeah. I'm looking forward to having the ability to have just at least one extra option for something to do.

- [Ben] Yeah. No I get it, like, I totally wish we lived closer and who knows. If this goes on much longer we may just drive out there some morning just to let the kids go run around and walk on the beach for a little bit. Because we are starting to get a little stir crazy.

- [Scott] Just bring your bikes, you park at my house, you make the kids ride their bikes to the beach and then make them ride back and they're all good and tired, you know.

- [Ben] Oh, except then they fall asleep in the car on the way home, and then they don't want to take a nap.

- [Scott] Well it just means you get to drive the car around longer. It needs to be run anyway, you know, it's not like you're going out every day anymore.

- [Ben] We were talking the other day, we couldn't even remember the last time we put gas in the car.

- [Scott] It's been a while so I had to run an errand yesterday or at least I thought I had to run an errand, where I was just gonna go pick up my dog's medication, like flea and tick stuff right, we were in and out.

- [Ben] Yep.

- [Scott] And so I was walking out of the house and my wife said, oh, don't worry about it, I've got to go out later to pick up groceries so I'll go do it. And I thought to myself, you know, I'm already like halfway out the door. I put my keys in my pocket, you know, I got like my wallet in my pocket, this hasn't happened in a long time, this is, oh, exciting. So I kinda stood there in the garage with the garage door open, looking at my car, and I said, all right, well I'm just gonna start it up, cause it's gotta be started anyway. And then that turned into, well, I should really just drive it around the neighborhood. So I went for like a lazy roll, just kinda basically around the long block, which you know takes like 10 minutes to drive around the neighborhood and do all that. And I was like, yeah, I got to drive a car today. That was crazy.

- [Ben] Did you put your seat back, roll down your windows and crank up your music too?

- [Scott] No, I really should have though. You know everybody likes to see that Malibu rolling through.

- [Ben] You know you gotta show it off. Oh, here is your new quote, Scott, this came from Michigan's governor. Speaking of quotes from governors: "It is better to be six feet apart right now than six feet under."

- [Scott] Yes. True statement, your outlook on things, yes, that is absolutely true.

- [Ben] All things that make us laugh. Yeah, I have all kinds of those. I will say the memes out of all this have been great. So, with all of that, all that said, should we talk about other stuff, like cloudy, cloudy stuff?

- [Scott] Yeah.

- [Ben] Since it's, is it cloudy today? It's supposed to be raining this week. That's the only downside of the beaches opening, it's supposed to rain all weekend.

- [Scott] I will make do.

- [Ben] All right, take an umbrella.

- [Ben] Outlook Add-Ins are a great way to improve productivity and save time in the workplace. And Sperry Software has all the Add-Ins you'll ever need. The Save As PDF Add-In is a best seller and is great for project backups, legal discovery and more. This Add-In saves the email and attachments as PDF files. It's easy to download, easy to install, and Sperry Software's unparalleled customer service is always ready to help. Download a free trial at sperrysoftware.com, s-p-e-r-r-y-s-o-f-t-w-a-r-e.com, and see for yourself how great Save As PDF is. Listeners can get 20% off their order today by entering the code cloudIT. That's cloudit, C-L-O-U-D-I-T, all one word, at checkout. Sperry Software: work in email, not on email.

- [Ben] So Azure Web Apps, it's something you said you have been working on recently. And you said we should talk about it.

- [Scott] Yes, that is in fact true. So I have been doing a fun little project at work. It's a little bit of a transformational project of taking an existing series of microservices that are all hosted in Azure Kubernetes Service today, and seeing if we can't break those microservices out and potentially host them on another hosting platform or another provider that's gonna allow us to run those Web Apps, and do it with the same performance characteristics and monitoring, kinda the operational insights we need, but at a much cheaper cost. So if you think about something like AKS, you know, you stand up a cluster and typically you want some kind of HA because it's a cluster, so you're gonna want multiple things, like multiple nodes in a node pool. With AKS, the way it works is you spin up a cluster, you get a cluster master. The cluster master is a VM but Microsoft doesn't charge you for it, it's part of the management plane, so the master is free. But you do pay for the underlying compute. So for every node that you spin up in a node pool, you're gonna pay for each one of those nodes. So, you know, you take two DS2's or D2s v3's, you know, that cost whatever they cost, 70, 80 dollars a month US,

- [Scott] And okay, well actually no, those are more. That's more like the DS1's.

- [Ben] D2, aren't those like 140?

- [Scott] Yeah, yeah, they're like 140 right.

- [Ben] The v3's are--

- [Scott] So you spin up two of those and that's 280 dollars, and those VM's need to be on all the time, right? So the master talks to them and you want that HA, and that's kinda like your baseline and where you wanna be at. And that's before you start talking about other services you might consume on the side. So like in the case of these microservices, they talk to an Azure SQL Database, so there's consumption there. There's storage consumption for diagnostics, there's all the things that you need to spin up with, you know, Log Analytics and Azure Monitor for containers and all these other things. So there have been some releases over the last year or so in Azure Web Apps that potentially give us a way to host those microservices natively within Azure Web Apps and gain some efficiencies. Performance characteristics we want to keep the same, so, you know, we should keep doing load testing and make sure we're baselined for latency and average response time, things like that. But from a cost perspective with Azure Web Apps, your unit of compute is your App Service plan. An App Service plan can host multiple Web Apps inside it. So if I can find a good App Service plan tier to host all these microservices in, and keep the performance baseline the same, it should theoretically be a little bit cheaper, better, faster to operate and stand up along the way. You know, there's some things that you are gonna get with Web Apps. Like you're gonna be potentially fixed in storage size. Your App Service plan determines whether you can use custom domains and SSL and things like that along the way. But you know, if you can find a way to land in like a Linux Web App in a standard service tier, the standard service tier starts at about $70 a month and, you know, you can scale up to 10 instances within those. And even if you go to the premium tier, you get into the premium tier, at least here in the US, in like East US and East US 2, at 73 bucks a month to start. And things like that can scale up to thirty instances, they support autoscale, custom domains, SSL. You can do kinda all the things you need to do within there, potentially, to stand up those workloads and get them to where they need to be.
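
As a rough sketch of the setup Scott describes here, one Linux App Service plan can act as the shared unit of compute for several container-based Web Apps. The resource group, plan, app names, and image references below are placeholders for illustration, not the project's actual resources:

```bash
# Create a resource group and a single Linux App Service plan (S1 Standard tier)
az group create --name demo-rg --location eastus
az appservice plan create --name microservices-plan --resource-group demo-rg \
  --is-linux --sku S1

# Host multiple containerized microservices as separate Web Apps in that one plan
az webapp create --name demo-api-1 --resource-group demo-rg \
  --plan microservices-plan \
  --deployment-container-image-name myregistry.azurecr.io/api1:latest
az webapp create --name demo-api-2 --resource-group demo-rg \
  --plan microservices-plan \
  --deployment-container-image-name myregistry.azurecr.io/api2:latest
```

Both Web Apps draw on the same plan's core and memory, which is where the single roughly $70-a-month plan versus two always-on AKS nodes comparison comes from.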

- [Ben] Got it.

- [Scott] So in this case, we looked and we said, okay, what's a good target service plan size, just based on performance characteristics of the existing apps, cause we were actually kinda leaving a bunch of compute on the table inside those existing AKS nodes. They were kinda sized up a little bit further than they needed to be. But even if we had downsized them, cost would have been a thing, particularly when you factored in storage and everything else. So we just started out kinda simple and said, hey, can we run it in a standard plan? Like, could I run it in an S1 if I severely restricted the RAM? Like, an S1 App Service plan is one core and 1.75 gigs of RAM. But again, it's only 70 bucks a month, so if I can run it inside of that for the core compute, 280 versus 70, all of a sudden I've got a bunch of flexibility and I can do some other things there.

- [Ben] Right, because now your Web Apps are naturally highly available. You're not having to go make sure you have two VM's and configure all that for your high availability. It's all just built right into the app service.

- [Scott] It's supported within the App Service, yeah. So there's this concept of instances that you can run. So effectively kinda how many scale units, or, you know, what does your horizontal scale look like within the App Service. So by default you usually run with one instance, but you can go in and change that configuration and say, I always wanna run with two instances or three instances. And then maybe have things like autoscale rules based on CPU or some other metric that you're gonna target for autoscale. And in the case of these service plans, right, being able to scale to 10 instances or, you know, 30, 50, depending on your service tier.
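
For reference, a minimal sketch of pinning an instance count and adding a CPU-based autoscale rule on a plan, continuing the placeholder names from the earlier example (check your tier's instance limits before relying on the numbers):

```bash
# Run the plan on two instances by default
az appservice plan update --name microservices-plan --resource-group demo-rg \
  --number-of-workers 2

# Create an autoscale setting that floats between 2 and 10 instances
az monitor autoscale create --resource-group demo-rg \
  --resource microservices-plan --resource-type Microsoft.Web/serverfarms \
  --name microservices-autoscale --min-count 2 --max-count 10 --count 2

# Scale out by one instance when average CPU stays above 70% for 10 minutes
az monitor autoscale rule create --resource-group demo-rg \
  --autoscale-name microservices-autoscale \
  --condition "CpuPercentage > 70 avg 10m" --scale out 1
```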

- [Ben] How do the resources compare then when you're talking like VM's? Because obviously you can also go out and get a VM that has a gig of RAM and a single core. It is really cheap. But then those resources are also having to go to the underlying OS. When you do this in the Web Apps, are you getting essentially the same amount of resources, figuring you're still getting like a core and a gig of RAM, but then it's dedicated 100% to your web application and to those microservices? Or is it still having to share those resources with some underlying OS?

- [Scott] Well I mean there's an underlying OS. So you're picking whether you're on Windows or Linux, you're just saying you don't want to have to worry about patching the underlying VM there. So kinda the way it works in Azure Web Apps, have you ever heard of ACU's?

- [Ben] Ah, yes.

- [Scott] All right, so an ACU is an Azure Compute Unit, just for those that aren't familiar with it. And they're meant to be a way to baseline or compare CPU performance across these different sizes and series, right. So when I come out and I say, okay, a D2s_v3, and you go, what the heck is a D2s_v3 and how do I compare that to a DS1_v2,

- [Ben] Yep

- [Scott] Well, you would potentially do that through something like ACU's along the way. They start at A0; actually, it's a little bit easier to start at like the A1 kinda family. So A1's are one core to one vCPU, so it's a one-to-one relationship. And the ACU, the Azure Compute Unit, is 100. So now you've got like a nice solid whole number that you can work off of there. So when you go to pick your App Service plan and what your unit of compute is, like if I went in and selected an S2 in the standard series. Well, an S2 is dual core, so it's two cores and it's 3.5 gigs of RAM. Then you go, what does that really equate to in CPU performance, cause two cores in a D series versus an A series, they're actually gonna have kinda some different metrics to them.

- [Scott] So then I can walk in and I can say, okay, well, an S2 is 200 total ACU. It's an A series compute equivalent, like I know where I have landed in there and I can start to figure out what I'm getting for my money, with the features that are offered to me. Right, if I go into the premium tier where I can do like isolated networking and some other things, you know, those are gonna be like Dv2 series equivalents, and you start to get into, all the way up to like 8x multiples, like you can do like 840 total ACU, like 14 gigs of memory, in a P3v2. So it gives you a little bit of a baseline and kind of a way to figure it out. So if you looked at, say in this case, the node pool, you knew you were running IIS or Apache or whatever it is on a VM, and you know what kind of VM you're on, now you can play with it a little bit and see, like, hey, would I actually be able to step down from a D series to an A series? Which potentially has some significant savings for me? You know, am I really CPU bound or am I memory constrained, disk constrained? What's the constraint for my application as you stand it up?
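
To make the ACU comparison concrete, here is the back-of-the-envelope math Scott is walking through. The per-core figures are from the published ACU and App Service plan tables around the time of recording, so double-check current Azure docs before leaning on them:

```bash
# A1 baseline:                  1 core  x 100 ACU/core = 100 total ACU
# S2  (A-series equivalent):    2 cores x 100 ACU/core = 200 total ACU, 3.5 GB RAM
# P3v2 (Dv2-series equivalent): 4 cores x 210 ACU/core = 840 total ACU, 14 GB RAM
# So a P3v2 lands at a bit over 4x the total ACU of an S2, which is how you weigh
# whether stepping a tier (or series) up or down actually buys the CPU you need.
```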

- [Ben] Got it, okay. So you have all of that. You've figured out how those resources are laid out, how you're going to go from one to the other. But now you actually have to move those microservices, or those containers. What do you have to think about then as you take these microservices that maybe you're running in AKS, and you wanna push them into one of these App Service plans? Is it just like a lift and shift, or is there some reconfiguration that has to go on there? Cause I honestly completely missed this, and I had no idea you could actually run microservices in App Service plans now.

- [Scott] So specifically containers, right, we're talking about taking container applications that are already containerized.

- [Scott] And being able to bring them over. So in the case of AKS running the Docker container runtime, they're kinda ready to move as-is. You know, we should be able to natively come over to a service like Web Apps for Containers running on Linux, which already has the Docker runtime as well, and stand a container up the same way. So you could always do microservice hosting, right, just deploy your Web App as the runtimes or kinda the server-side static frameworks, whatever you had going on in your Azure Web Apps, that was fine. But the nice thing here is we're just lifting a container and getting it to where it needs to be. So there's a couple of things in this particular case that ended up being kinda interesting. So if you think about standing up something like an AKS cluster, it's a container orchestrator, so it's bringing things to you like service discovery. There's certainly networking components to it. So if I'm deploying a microservice on a cluster, how does traffic from the outside hit an IP? And how does it know to hit that IP and then be routed all the way to that backend service, specifically to, you know, microservice A versus microservice B? So that all happens with the service load balancers you might deploy. And typically you want some kind of ingress controller where maybe you can have more play within the routing of that traffic. You might not always want, like, just a standard kinda load balancer service in there. So for this one, it was an existing implementation of Traefik, so it's just kind of a way for us to stand up websites and do the routing and things like that within that cluster. But that meant that Traefik was going away when we came over to the other side. So in AKS, the way everything was set up is there was a root URL. So, you know, there was msclouditpropodcast.com and that was kinda the homepage.

- [Scott] And then the API's were all stored in virtual directories, virtual routes underneath there. So you would have like /api1, /api2, /api3. So everything was in the same canonical and fully qualified domain name. And when we went to Azure Web Apps, well, that changed a little bit. Because we can't run multiple containers in, we can't run like a whole container group in Azure Web Apps.

- [Ben] Got it.

- [Scott] Inside the same Web App. So what was, you know, five or six containers that were all effectively the same website from a routing perspective actually became five or six separate websites on the other side. So there was some reconfiguration that needed to be done there, right. Like things like, okay, a dynamic configuration for which API endpoint we talk to. Where there was only one URL, now there needed to be, you know, one distinct environment variable that we could set for each API so that you could still talk to the right place and grab the right thing. But the really cool thing there is, because it's all just containers, right, we can go change the code, we can spin it up, we can create a container image and we can spin that up very quickly within Azure Web Apps. And it turns out that with Azure Web Apps it's potentially even a little bit easier for us to do the deployment. So something like that dynamic configuration, where it was all running inside either a native Kubernetes deployment or, in this case, everything was being deployed with Helm and Helm charts, and, you know, you're setting dynamic values for environment variables and things like that. In Azure Web Apps you've got native app settings, like there's configuration per Web App. And all you have to do is go set those keys within the App Service, within the Web App configuration, and they're automatically projected as environment variables within the Web Apps, within the containers that are running within the Web App and within that runtime. It was super sleek and super kind of turnkey. Just to spin up a container and run it was a very quick thing to do. It felt nice and easy and way easier than potentially, you know, depending on your feelings on it, mucking with a bunch of YAML to do the existing Kubernetes and Helm deployments you were going through.
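
A minimal sketch of that pattern with the CLI, using a made-up setting name rather than the project's real key: an app setting set on the Web App shows up as an environment variable inside the running container.

```bash
# Point one microservice at another via an app setting (projected as an env var)
az webapp config appsettings set --name demo-api-1 --resource-group demo-rg \
  --settings API2_BASE_URL="https://demo-api-2.azurewebsites.net"

# Inside the container the code just reads the environment variable, e.g. in
# .NET Core: Environment.GetEnvironmentVariable("API2_BASE_URL")
```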

- [Ben] As IT professionals in the cloud era, sometimes it feels like we don't speak the same language as the rest of the organization. So when stakeholders from finance or other departments start asking about a specific project or team's Azure costs, they don't always realize how much work is involved in obtaining that information. Sifting through cluttered CSV's and a complex mess of metadata in order to manually create custom views and reports, it's a real headache. On top of helping you understand and reduce your organization's overall Azure spend, ShareGate Overcast lets you group resources into meaningful cost hubs and map them to real world business scenarios. This way you can track costs in the way that makes the most sense with your corporate structure, whether it's by product, business unit, team or otherwise. It's a flexible, intuitive and business friendly way of tracking Azure infrastructure costs. And it's only available in ShareGate Overcast. Find out more at sharegate.com/itpro.

- [Ben] Got it. So really, from that migration standpoint then, moving from one to the other, there's not a whole lot that has to change in your containers other than kinda how those different API's talk to each other, how those different containers would talk to each other. Other than that, a lot of it is just, more or less, a lift and shift of containers into the App Service.

- [Scott] Yeah, it's really saying, hey, can we validate this, like from just a very much raw proof-of-concept side. Like, do these things work, yes or no? And then what kinds of efficiencies can we light up along the way? So for something like Web Apps for containers, in this particular case, if you think about kinda the container lifetime, right, you have something like a Dockerfile that builds the container. So you build that container and then you wanna push that container image to a registry. And then you wanna be able to pull from that registry based on a container name and a tag and things like that. So we were using Azure Container Registry, or ACR, as our container registry. So it's a Docker compatible container registry, supports like docker push, docker pull, things like that along the way. So it's a nice private registry, so you don't have to go to Docker Hub or anything like that. We looked at ACR and kinda the way existing builds were going on today. So existing builds were happening on build agents as part of like a continuous deployment, like a CI and CD pipeline. And to do those builds, you need to have the Docker daemon, not just the Docker client, you need to have like the full Docker runtime to be able to do a docker build. So that means that you need a Linux server stood up, or if you're doing Windows containers, you know, you need that compatibility. But you effectively need a unit of compute to do your build for you. So if we looked at the CI/CD deployment side, that meant we always had to make sure we were picking the right build agent. Did it have the right version of Docker on it? Was it bootstrapped the right way and doing the right things to be able to execute builds based on our Dockerfiles? And it was just like an extra piece that you needed in there. So, being that everything is in ACR, we actually wanted to see if we could light up some new options there. So one of the things that exists inside ACR is a feature called ACR Build Tasks. So what you can do is you can send a Dockerfile, basically, think about maybe zipping it up in like a tar or gzip, and you can send it up to ACR. And ACR will do your build for you. So the unit of compute is built into ACR itself. I don't need that separate build agent to run docker build and then do a push to the registry for me. So it kinda simplifies things, right. What was potentially two different steps and two different commands, and having to worry about logging into a registry and things like that, now in that CI/CD pipeline it's just running an Azure CLI command and making sure that I'm authenticated to the CLI through a service principal or a user that has access to that registry. Which is really kinda cool. So why is it good that I can do things like this ACR Build Task, or this ACR task, directly inside of ACR? Well, think about one-off builds. Like, one of my struggles with Docker has always been, am I on a computer that has all the tooling installed that I need? Do I already have the Docker client? Do I already have the Docker daemon? Am I in an environment where I can actually do a docker build? And sometimes the answer is no, right. You don't know, you might be at like a customer's environment and on like a laptop that they provided for you, and you can't even install Docker on it, right. So a lot is locked down. Well, the cool thing about ACR Build Tasks is it's the unit of compute and it's doing the Docker build for me just based on my Dockerfiles. That means that I can perform Docker builds from places where I don't have access to the Docker daemon. So for me, that means that I can go into this environment, if you think about just Azure, I can fire up Cloud Shell now, and all I need to do is have access to that Dockerfile from Cloud Shell. So I can still do something like a git clone and clone that Dockerfile out of the repository it lives in. And then I can just send that file to an ACR build task. There is no way I could build a container natively inside of Cloud Shell, cause Cloud Shell is already running in a container, right. You know, it's like too many levels of virtualization removed.
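
As a sketch of that Cloud Shell flow, with a placeholder repository and registry name standing in for the real ones:

```bash
# Grab the Dockerfile; no local Docker client or daemon is needed in Cloud Shell
git clone https://github.com/contoso/sample-api.git
cd sample-api

# Hand the build context to ACR; it builds the image and pushes it to the registry
az acr build --registry myregistry --image sample-api:v1 .
```

The single `az acr build` call replaces the separate `docker build` and `docker push` steps the build agent used to run.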

- [Ben] It's container inception.

- [Scott] Yeah, so all of a sudden you've gained this really cool new ability. And it has simplified that pipeline, potentially, right. What was two distinct actions, a build and a push, now just becomes a single action, which is a build task for me. And I'm off to the races and ready to go. Which is really, really kinda cool. Like, it really simplified overall environment deployment. Cause now, from a deployment perspective, in the past, if you wanted to stand up, say, like a new dev environment for a developer, they had to have all that tooling locally. Now that everything is 100% Azure native, including the Docker builds, we were able to go to those developers and just give them a Bash script, and they can go into Cloud Shell and run a Bash script and come back in 10 minutes and everything is just kinda done for them.

- [Ben] Nifty. So you could do all of this now from a Chromebook?

- [Scott] Yeah, oh yeah. Yeah, no I've been living in like,

- [Ben] An iPad.

- [Scott] Just a web browser, and yeah, it's all been going swimmingly. I've been really impressed with it. There's certainly some things that have changed along the way, I think, particularly operationally. So it wasn't so much can we do it; yeah, it can be done, and certainly there's that cost component to it. But can you continue to run the service in the way that you need to run it? So if you think about kinda the AKS PaaS service, ultimately you get a lot of insight there, right? You can dig pretty deep under those VM's, and things like Azure Monitor for containers with the dependency agents, like, they're giving you some pretty raw numbers that you can then consume in tooling that maybe you're already familiar with, like Grafana or Prometheus, for doing dashboards and kinda optics for operations. By going to kinda 100% Azure native services, some of that changed. So potentially, like, the telemetry that you get out of that Web App for containers, well, because it's running in Azure Web Apps, if you haven't instrumented the container so it's talking to something else like App Insights, which in this case it wasn't instrumented for, you know, rather than saying, okay, let's make another big code change and implement App Insights across the board, let's see what we can get out of native Azure metrics. So metrics through, like, Azure Monitor are just based on the existing resource providers, so in this case, Microsoft.Web/sites. So what are the metrics that I can get out of Microsoft.Web/sites? And now that I don't have Grafana or Prometheus for my dashboard, you know, what can we do with maybe building out Azure dashboards or using things like workbooks to get those visualizations back to where they need to be? And so, you know, the security team and the operations team can understand where things lay out for these applications in their updated architecture, and kind of what that looks like. That might change things a little bit. You might be used to looking at, like, CPU time as a metric from a virtual machine. Well, in Azure Web Apps, you know, you can do an aggregation across, like, average CPU usage, effectively the same thing. But it might look a little bit different, and you might have to figure out just what is that difference and does it fit my need, and is it really there and ready to go for me.
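
For example, pulling those platform metrics for a Web App without any in-app instrumentation might look like the following sketch. The resource names are placeholders, and the metric names should be verified against what Microsoft.Web/sites actually exposes in your subscription:

```bash
# See which metrics the Microsoft.Web/sites resource provider exposes for the app
az monitor metrics list-definitions \
  --resource "/subscriptions/<sub-id>/resourceGroups/demo-rg/providers/Microsoft.Web/sites/demo-api-1"

# Pull average CPU time and response time in 5-minute buckets
az monitor metrics list \
  --resource "/subscriptions/<sub-id>/resourceGroups/demo-rg/providers/Microsoft.Web/sites/demo-api-1" \
  --metric "CpuTime" "AverageResponseTime" \
  --interval PT5M --aggregation Average
```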

- [Ben] Got it. Very cool. More stuff to play with. I don't have time to play with all this stuff.

- [Scott] Yeah, it was really sleek as just kind of a validation exercise, to say, hey, do these things work? And can we figure out where the pain points are gonna be, or, you know, potentially where those rough edges are gonna be with that service, you know, App Service Web Apps for containers, along the way. So we certainly ran into things. Like, we had a container which was just a .NET Core app, it was one of the microservices, but just through its initial build process it was ready to come up on port 8080 internally. And then, you know, you're just doing port mapping at the service level in AKS to say, like, no, it's really 443 mapping to 8080 on this container, and blah blah blah. So we had some things to get over like that, and there was another .NET Core API that was misbehaving a little bit in Azure Web Apps. So it turns out that with Azure Web Apps, when it goes to start your container, one of the ways it figures out container health for a website is just by effectively doing pings into it, so by pinging your website just on port 80. So for some of these API's, just based on routing, cause they were at /api/1, /api/2, things like that, if you just went to /api/1, just like the root homepage or root route of the API, we weren't actually returning any responses, so things like Azure Web Apps would die. And it would just fail the container. It would say, I can't start it, because the website is not up. It's like, hold on, the website's there, you just need to look in this other place. Or we needed to, in some cases, shut off availability checks just to get the apps up and running. And then over time we can fix those errors and kinda get them to where they need to be, and do that more transformational remediation. So from a lift and shift, or just a straight up re-host perspective, going from AKS to Web Apps for containers, super minimal change. Like, if I hadn't had to change those environment variables, there wouldn't have been a reason to change anything along the way. Right, it would have been just a straight one-to-one. And then potentially these other more transformational changes pick up, and are like, hey, let's make that API run the right way so that we can keep availability checks on, cause that's kind of an important thing.
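
Two of the knobs that come into play there are app settings as well, sketched below with the same placeholder names. `WEBSITES_PORT` tells App Service which port the container actually listens on, and the container start time limit is a documented way to give a slow-starting container more room before the platform gives up, rather than turning checks off entirely:

```bash
# The container listens on 8080 internally, so tell App Service where to route traffic
az webapp config appsettings set --name demo-api-1 --resource-group demo-rg \
  --settings WEBSITES_PORT=8080

# Give the container up to 10 minutes (600 seconds) to come up before it is failed
az webapp config appsettings set --name demo-api-1 --resource-group demo-rg \
  --settings WEBSITES_CONTAINER_START_TIME_LIMIT=600
```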

- [Ben] Right. Very cool.

- [Scott] Yes.

- [Ben] Something of an exercise.

- [Scott] It was definitely different. It was something to do that was potentially a little bit different from virtual machines, and tapping into some new stuff, and potentially solving some pain points. Like, honestly, I walked away, and at the end of it I was like, this ACR Build Tasks thing, I can use this all the time in my workflow now. Even for demos and webinars and things like that, where now I don't need to worry about, you know, was my Hyper-V VM for Ubuntu up to date and ready to go. Cause I always had to have a separate VM to do that. You know, you can't do it inside of WSL 1 today, like, you can't do that docker build cause, again, you don't have the daemon there. So it just simplified a bunch of things, and I just thought that was like one of the coolest features, cause it's gonna make my life a lot easier for demonstrations and webinars and everything else. Being a little selfish.

- [Ben] Yeah, definitely. Alright, sounds good. Well thanks for that episode.

- [Scott] Yeah, no worries.

- [Ben] Another fun Azure one. So go enjoy your weekend now. Go get out to the beach. Get some fresh air.

- [Scott] Yeah, it is one of my goals.

- [Ben] Alright, sounds good. Well, enjoy, good talking to you. And we'll talk to you again next week.

- [Scott] Thanks.

- [Ben] If you enjoyed the podcast, go leave us a five star rating in iTunes. It helps to get the word out so more IT pros can learn about Office 365 and Azure. If you have any questions you want us to address on this show, or feedback about the show, feel free to reach out via our website, Twitter or Facebook. Thanks again for listening and have a great day.

Sponsors

    • Sperry Software – Powerful Outlook Add-ins developed to make your email life easy even if you’re too busy to manage your inbox
    • ShareGate – ShareGate’s industry-leading products help IT professionals worldwide migrate their business to Office 365 or SharePoint, automate their Office 365 governance, and understand their Azure usage & costs
    • Office365AdminPortal.com – Providing admins the knowledge and tools to run Office 365 successfully
    • Intelligink – We focus on the Microsoft Cloud so you can focus on your business

Show Notes

About the sponsors

Sperry Software, Inc focuses primarily on Microsoft Outlook and more recently Microsoft Office 365, where a plethora of tools and plugins that work with email have been developed. These tools can be extended for almost any situation where email is involved, including automating workflows (e.g., automatically save emails as PDF or automatically archive emails that are over 30 days old), modifying potentially bad user behaviors (e.g., alert the user to suspected phishing emails or prompt the user if they are going to inadvertently reply to all), and increased email security (e.g., prompt the user with a customizable warning if they are about to send an email outside the organization). Get started today by visiting www.SperrySoftware.com/CloudIT
Every business will eventually have to move to the cloud and adapt to it. That’s a fact. ShareGate helps with that. Our industry-leading products help IT professionals worldwide migrate their business to Office 365 or SharePoint, automate their Office 365 governance, and understand their Azure usage & costs. Visit https://sharegate.com/ to learn more.
Intelligink utilizes their skill and passion for the Microsoft cloud to empower their customers with the freedom to focus on their core business. They partner with them to implement and administer their cloud technology deployments and solutions. Visit Intelligink.com for more info.
