Microsoft Build 2014 Developer Conference Transcript – Day 2 (Full)

Build 2014, Microsoft’s annual developer conference, was held at the Moscone Center in San Francisco from April 2 to April 4, 2014. Here is the full Day 2 keynote presentation from the conference.

Operator: Ladies and gentlemen, please welcome Executive Vice President, Cloud and Enterprise, Scott Guthrie.

Scott Guthrie – EVP, Microsoft Cloud and Enterprise Group

Good morning, everyone, and welcome to day two of Build.

We now live in a mobile-first, cloud-first world. Yesterday, we talked about some of the great innovations we’re doing to enable you to build awesome client and device experiences. Today, I’m going to continue that conversation and talk about how you can power those experiences using the cloud.

Azure is Microsoft’s cloud platform, and it enables you to move faster and do more. A little over 18 months ago here in San Francisco, we talked about our new strategy with Azure and our new approach: a strategy that enables you to use both infrastructure as a service and platform as a service capabilities together, a strategy that enables developers to use the best of the Windows ecosystem and the best of the Linux ecosystem together, and one that delivers unparalleled developer productivity and enables you to build great applications and services that work with every device. Since then, we’ve been hard at work fulfilling that promise.

Last year was a major year for Azure. We shipped more than 300 significant new features and releases. 2014 is going to be even bigger. In fact, this morning during the keynote, we had more than 44 new announcements and services that we’re going to be launching. It’s going to be a busy morning.

Beyond just features, though, we’ve also been hard at work expanding the footprint of Azure around the world. The green circles you see on the slide here represent Azure regions, which are clusters of datacenters close together, and where you can go ahead and run your application code.

Just last week, we opened two new regions, one in Shanghai and one in Beijing. Today, we’re the only global, major cloud provider that operates in mainland China. And by the end of the year, we’ll have more than 16 public regions available around the world, enabling you to run your applications closer to your customers than ever before.

As our features and footprint have expanded, adoption of Azure has grown dramatically. More than 57 percent of the Fortune 500 companies are now deployed on Azure. Customers run more than 250,000 public-facing websites on Azure, and we now host more than 1 million SQL databases on Azure.

More than 20 trillion objects are now stored in the Azure storage system. We have more than 300 million users, many of them — most of them, actually, enterprise users, registered with Azure Active Directory, and we process now more than 13 billion authentications per week.

We have now more than 1 million developers registered with our Visual Studio Online service, which is a new service we launched just last November.

Let’s go beyond the big numbers, though, and look at some of the great experiences that have recently launched and are using the full power of Azure and the cloud.

Titanfall was one of the most eagerly anticipated games of the year, and had a very successful launch a few weeks ago. “Titanfall” delivers an unparalleled multiplayer gaming experience, powered using Azure.

Let’s see a video of it in action, and hear what the developers who built it have to say.

[Video Presentation]

One of the key bets the developers of “Titanfall” made was to run all game sessions in the cloud. In fact, you can’t play the game without the cloud, and that bet really paid off.

As you heard in the video, it enables much, much richer gaming experiences. Much richer AI experiences. And the ability to tune and adapt the game as more users use it.

To give you a taste of the scale, “Titanfall” had more than 100,000 virtual machines deployed and running on Azure on launch day, which is an unparalleled scale for a game launch, and the reviews of the game have been absolutely phenomenal.

Another amazing experience that recently launched and was powered using Azure was the Sochi Olympics delivered by NBC Sports.

NBC used Azure to stream all of the games both live and on demand to both Web and mobile devices. This was the first large-scale live event that was delivered entirely in the cloud with all of the streaming and encoding happening using Azure.

Traditionally, with live encoding, you typically run in an on-premises environment because it’s so latency dependent. With the Sochi Olympics, Azure enabled NBC to not only live encode in the cloud, but also do it across multiple Azure regions to deliver high-availability redundancy.

More than 100 million people watched the online experience, and more than 2.1 million viewers alone watched it concurrently during the U.S. versus Canada men’s hockey match, a new world record for online HD streaming.

[Video Presentation]

I’m really excited to invite Rick Cordella, who is the Senior Vice President and General Manager of NBC Sports Digital, on stage to talk with us a little bit about the experience and what it meant.

So the first question I had, can you tell us a little bit about what the Olympics means to NBC?

Rick Cordella – SVP and General Manager, NBC Sports Digital

It’s huge. I mean, even looking at that video right there, I’m taken back to a month ago and how special it is, what it means to the athletes. But what it means to NBC is big. It’s enormous for our company. Steve Burke, our CEO, calls it the heart and soul of the company. And if you consider how much content and how many events NBC is connected to, that’s a pretty bold statement.

Six months out, we actually take our peacock icon and adorn it with the Olympic rings. So for every piece of content that appears on the NBC broadcast network, the Olympic rings are present. It’s big for our company.

Scott Guthrie – EVP, Microsoft Cloud and Enterprise Group

Can you talk a little bit about the elastic scale and how the cloud is kind of key to enabling it?

Rick Cordella – SVP and General Manager, NBC Sports Digital

Sure. You mentioned that semifinal game between the U.S. and Canada that Friday afternoon. To be able to scale to that massive amount of volume is enormous. Setting records. You go from a curling match that may have just one stream going on, to over 30 concurrent streams.

And then, oh, by the way, you have five EPL games happening at the same time, a PGA tour tournament that’s happening, and you really need that planning to go into place as we scale out across 2,000-plus events with the NBC sports group.

Scott Guthrie – EVP, Microsoft Cloud and Enterprise Group

Can you talk just a little bit in terms of — clearly, it’s a big deal for NBC. How critical is it to have an enterprise-grade platform deliver it?

Rick Cordella – SVP and General Manager, NBC Sports Digital

The company bets about $1 billion on the Olympics each time it goes off. And we have 17 days to recoup that investment. Needless to say, there is no safety net when it comes to putting this content out there for America to enjoy. We need to make sure that content is out there, that it’s quality, that our advertisers and advertisements are being delivered to it. There really is no going back if something goes wrong.

Scott Guthrie – EVP, Microsoft Cloud and Enterprise Group

Cool, I’m glad it went well.

Rick Cordella – SVP and General Manager, NBC Sports Digital

Yeah. No, I mean, Azure — honestly, I know I’m speaking here, but Azure really played a critical role in this happening. It’s not as if you can just pick a company out there that has a product that you don’t trust to pull off an event of this magnitude. These are the largest digital events that any company pulls off. And we’re really happy that we worked closely with Microsoft Azure this time around.

Scott Guthrie – EVP, Microsoft Cloud and Enterprise Group

Great. Thanks, Rick.

Rick Cordella – SVP and General Manager, NBC Sports Digital

Thanks, Scott.

Scott Guthrie – EVP, Microsoft Cloud and Enterprise Group

So we’ve talked at a high level about what you could do with Azure. Let’s now dive into specifics.

One of the things that makes Azure unique is its rich set of infrastructure as a service and platform as a service capabilities and how it enables developers to leverage these features together to build great applications that can support any device.

Let’s go ahead and look at some of the great new enhancements we’re releasing this week in each of these different categories.

First up, let’s look at some of the improvements we’re making with our infrastructure features and some of the great things we’re enabling with virtual machines.

Azure enables you to run both Windows and Linux virtual machines in the cloud. You can run them as stand-alone servers, or join them together to a virtual network, including one that you can optionally bridge to an on-premises networking environment.

This week, we’re making it even easier for developers to create and manage virtual machines in Visual Studio without having to leave the VS IDE: You can now create, destroy, manage and debug any number of VMs in the cloud.

Prior to today, it was possible to create reusable VM image templates, but you had to write scripts and manually attach things like storage drives to them. Today, we’re releasing support that makes it super-easy to capture images that can contain any number of storage drives. Once you have this image, you can then very easily take it and create any number of VM instances from it, really fast, and really easy.

Starting today, you can also now easily configure VM images using popular frameworks like Puppet, Chef, and our own PowerShell DSC tools. These tools enable you to avoid having to create and manage lots of separate VM images. Instead, you can define common settings and functionality using modules that can cut across every type of VM you use.

You can also create modules that define role-specific behavior, and all these modules can be checked into source control and they can also then be deployed to a Puppet Master or Chef server.

And one of the things we’re doing this week is making it incredibly easy within Azure to basically spin up a server farm and be able to automatically deploy, provision and manage all of these machines using these popular tools.

What I want to do here is invite Mark Russinovich on stage to actually show off how you can use all this functionality and some of the cool things you can now do with it. Here’s Mark.

Mark Russinovich – Microsoft Technical Fellow in Windows Azure

I thought we were going to wear black today, Scott.

Scott Guthrie – EVP, Microsoft Cloud and Enterprise Group

Yeah.

Mark Russinovich – Microsoft Technical Fellow in Windows Azure

Oh, well. Morning, everybody. So let’s get started. I’m going to show you how easy it is to create a virtual machine from inside of Visual Studio here by going to the Server Explorer, going down to virtual machines, right clicking, and then you can see a new menu item there, create virtual machine.

Clicking it launches a wizard experience that looks a lot like the portal’s wizard experience, but I’m doing it right from here inside of Visual Studio. First step, pick a subscription to deploy into. Second step, pick an operating system image. I’m, of course, going to pick the best one on this list, the latest version of 2012 R2.

Then I pick a virtual machine name. So I’ll give it a nice, unique name here. Provision a user account to log into the machine if I need to. Either create a new cloud service, or deploy into an existing cloud service. I’ll go ahead and pick an existing one. And then pick a storage account, into which the operating system disk gets created. I’m going to, again, pick an existing storage account.

Press next. And the final step would be to configure any network ports that I want to open up on the machine. But I’m good with the default, so I’ll just press “create” and let it launch. In a few minutes, we’ll have a virtual machine ready to go.

But it wouldn’t be that cool if all you could do from Visual Studio was just create and delete virtual machines.

What’s even better is that you can also debug your virtual machines right from inside of Visual Studio from your desktop.

And to demonstrate that, I’ve got a Web service and a rich client application here. It’s an expense submission application. You can see I’ve loaded up the client and the service into Visual Studio.

I’m going to launch the client here, just so you can see what it looks like. And it’s already prepopulated here with some expenses from my day yesterday here in San Francisco.

Now, you can immediately see that something’s wrong. And that is that when I go to a Mexican restaurant for lunch, the margaritas that I drank come out to way more than $12. So I’ll fix that right there.

And now let me switch to the virtual machine that the service is running in. And you can see right here, it’s ready for an expense submission.

And I’m going to switch back. And let’s just presume that I’ve got a bug inside of the submit expense method up in that service. And here we can see, submit for approval. I’m going to set a break point right there at the entry point.

And now my next step is to connect Visual Studio up to that machine in the cloud so I can interactively debug it.

The first thing I need to do is to enable debugging in that virtual machine. And you can do that inside of Visual Studio by clicking on a virtual machine and selecting the “enable debugging” menu item.

What this does is takes advantage of the Azure agent that sits inside of the virtual machine that I created to dynamically inject the Visual Studio debugging client. And once it’s injected, then I can go ahead and use Visual Studio to connect to it and debug the code that’s running inside that virtual machine.

That machine that’s running that expense service already has that debugger agent injected into it. So all I have to do is right click, say attach debugger, and now I’m going to retrieve a list of processes running in that virtual machine, the one I’m interested in, the one running the service, which is right here. Expense IT service.

I press “attach,” and at this point you can see Visual Studio is ready to hit that break point. So when I go back to the rich client and click “submit,” you see I just hit the break point live, and now I can debug as if this thing was on my local desktop. So no more installing Visual Studio on the server.

So the next thing that Scott talked about was the power of creating VM images from your running VMs that consist of complex setups with multiple disks. I’ve got a virtual machine up here called a RigVMBuild B. And you can see that it’s got an OS disk and a whole bunch of data disks attached to it.

If I wanted to create multiple versions of this with those copies of the data on that disk, of course I could scrape together PowerShell scripts to do it. But with the new cmdlets and the new REST APIs we’ve got in Azure, I can do that very easily.

And so here’s a PowerShell command that invokes the Save-AzureVMImage cmdlet. I’m going to go ahead and click that. And what that’s going to do is launch off a capture of that VM into a VM image that I can then deploy from.

I’ve already actually created a VM image from that machine and I’m going to reference it right here. You can see this is a new Azure VM with a config here that specifies an image name, and that image name is the image name of a previous capture.

What this is doing is provisioning a new copy from that new VM instance from that VM image capture that I made previously. And as Scott had mentioned, one of the cool scenarios for this is if I’ve got a test environment where I want to test multiple different copies, maybe throw a different test at each one in parallel. I can go stamp out multiple instances of that same VM image.

Another way to use this, though, is kind of as a snapshot restore capability. So if I’m debugging and I want to go back to a previous point in time, I can capture an image at a good state, go do some work in the VM, then delete that VM and create a new instance back from that previous state. So two really cool scenarios enabled with this just with those simple commands.

I’ll go back to the portal now to see if we can see that VM coming up, spinning up. And there it is, a RigDBVM. Hold on, let me refresh. And we should see that VM show up here, and there it is, it’s retrieving status, there we go, starting. This is the VM that I just provisioned.

If I go take a look at its details, you can see that it’s got all those disks just like that original one did, just with the simple command.

The final thing I want to talk about is the integration with configuration management systems. Specifically, in this case, Puppet. With the collaboration with Puppet Labs, we’ve made it very easy to go and create Puppet Masters from within Windows Azure by adding a Puppet Master image to our platform image repository.

So you can see down here, we’ve got a Puppet Labs section. And if I click that, I’ll be able to launch a Puppet Enterprise Puppet Master server inside of Windows Azure.

But we’ve also made it easy to create Puppet agents, machines running the Puppet agent that connect to a Puppet Master. And that’s what I’m going to do now is switch over to here and create a Windows-based virtual machine.

Give it a name, type in my name and password. Press next. And at this point, I’m going to see if the defaults work, because nobody’s taken that name. And the final step is to install the VM agent. If we’ve got the VM agent installed, we can use that same agent technology to inject other code into that VM.

And the one that I’m going to inject for this demo is Puppet. You can see we’ve also partnered with Chef to get Chef agent support in there. And now at this point, all I have to do is tell it where the Puppet Master is. And I’ll just type in an example Puppet Master name. And at that point, when I go and provision that virtual machine, that Puppet agent is going to launch and connect to that Puppet Master and I’ll be able to manage it from there and deploy code into it.

To actually show you deploying code into virtual machines on Azure from a Puppet Master, I’m going to invite up Luke Kanies, the CEO of Puppet Labs, to take us further into the demo. Luke?

Luke Kanies – CEO, Puppet Labs

So we at Puppet Labs exist to help you automate the configuration and ongoing management of your — as we like to think of them — stupid computers. And the goal is to allow you to do a lot less firefighting and a lot less script running and a lot less maintenance of things like golden images and things like that and a lot more time getting your software in front of your users more often, more quickly, and a lot less hassle.

One of the great things about Puppet is that it works on physical machines, virtual machines, it works in the public cloud, it works on private cloud, and really any combination of that. It works on pretty much any operating system you want to manage, anything you’re not embarrassed to admit you run, we can probably manage it whether it’s a network device or a firewall or a standard computer. And does this at massive scale. We’ve got tens of millions of machines under management by Puppet. And we’ve got sites that are more than 100,000 servers managed by just one Puppet infrastructure.

We’ve got great companies who are using Puppet to do interesting work with the datacenters, including NASA, GitHub, Intel, Bank of America, and a lot more.

So we’re excited to bring Puppet Enterprise to Azure. And I’m going to give a small example of what it looks like to use it here.

So this is a relatively standard — this is just the normal interface to using Puppet Enterprise. You can see here we’ve got some machines under management, small number here, and the green bars are every time a Puppet agent runs and updates your infrastructure, it says, hey, something happened. In the blue case, it says we actually had to make some sort of change to bring you into synch. In the green case, it just says we checked, everything is great, we didn’t have to do any extra work.

So in this case, I’m going to make some changes to our Windows Servers. I’m going to go to the Windows Server group. And what we want to do here is we’ve got an example machine that is running — everyone uses the task manager for various things. I’ve heard there’s a better version of the task manager out there.

And so what we’re going to do is see what it takes to update those. It’s a pretty straightforward operation with Puppet. In Puppet, the class is essentially the way of referring to the code associated with the function we want to do. So we’ve got a Puppet module for Microsoft Sysinternals.

Mark Russinovich – Microsoft Technical Fellow in Windows Azure

Ah, you do have good taste in tools.

Luke Kanies – CEO, Puppet Labs

And with this, we associate this class with, hey, we want to do this work, the work associated with this with all our machines. And normally this change would propagate out to your whole infrastructure in the space of probably around a half an hour. If you’ve got 100,000 machines you have under management, you probably don’t want all of them hitting your servers at exactly the same time.

In this case, though, we’ve got the system running on a relatively tighter timeline. And so we go look at it, and now we’ve got the better, far more powerful version running.

This is a very small example of what you can do. You can manage complete application stacks. You can manage the infrastructure, all the kind of laying the bits down so that the system works up to I’ve got my whole application built, I’ve got my database, my application server, my Web server, things like that.

So it really is a powerful system for getting all the work done from the bare OS up to a functioning, running application. This is especially important in the cloud where the whole goal here is if you can get your virtual machine up in five minutes, but it still takes you three weeks to configure the server, that kind of defeats the point. So the goal here is to get the speed of configuration at the same rate as the speed of building the machine itself.

Scott Guthrie – EVP, Microsoft Cloud and Enterprise Group

Great. Well, this awesome collaboration between Puppet Labs and Microsoft, bringing Puppet into Azure is enabling our customers that use tools like Puppet to get started on Azure using their existing processes.

To talk about that, I’m going to ask Daniel Spurling from Getty Images on stage to talk about how this is helping Getty move to Azure.

Daniel Spurling – VP of Tech Services, Getty Images

Thank you. Good morning, everyone. We are Getty Images, we serve more than 1.5 million active customers in more than 185 countries, providing the best conceptual and editorial content in the world.

We brought stock photography into the digital age, pioneering the ability to license content and imagery online.

We push the envelope every day. When a photographer around the world takes a brilliant photo, we want that photo to be available for anyone in the world within minutes.

Because these assets you see on the screen here, they have a job to do. They evoke emotion. Corporations around the world, companies of every single type, rely on us to help them tell their story.

We also recently launched an embed product which allows anyone, anywhere, free of charge, to utilize over 40 million pieces of our high-quality imagery for noncommercial use. This is our first step into the consumer market.

Now, we ingest and manage millions of images and videos, with millions of new pieces of content added every single day. And with that, for Getty to succeed, technology must have the same scale, agility, and global footprint as our company does in order to support our massive content flow from any corner of the world, from Tokyo to Rio de Janeiro.

We’re excited about the Microsoft Azure Cloud Platform, which works with our tools such as Puppet. It gives us the global scale and infrastructure anywhere that we need it, be it Japan or Brazil. We today actively use Puppet for automation and configuration management. And this will give us the agility and consistency we need to move across environments when we burst from our datacenter into an external cloud provider, landing it right every single time.

As our business grows and our requirements expand, we will need to continue to support more content across more devices, and therefore, our infrastructure needs to scale with us dynamically and without friction.

For that, the cloud that we choose and the tools that we use must truly be open and seamlessly support both our Windows and Linux environments. That’s why we at Getty are excited about the Puppet Labs and Microsoft partnership, and the value that Puppet and Azure can bring to our business and customers. Thank you.

Scott Guthrie – EVP, Microsoft Cloud and Enterprise Group

So infrastructure as a service gives you a very flexible environment and enables you to manage it however you want.

Actually, before I go there, a whole bunch of announcements here. As you saw this morning, we’ve really made a bunch of improvements to our infrastructure as a service capabilities. Just as a short list of a number of them: you saw VM image capture and deploy, the Visual Studio integration, and the Puppet and Chef support.

We’re also excited to announce the general availability of our auto-scale service, as well as a bunch of great virtual networking capabilities including point-to-site VPN support going GA, new dynamic routing, subnet migration, as well as static internal IP address. And we think the combination of this really gives you a very flexible environment, as you saw, a very open environment, and lets you run pretty much any Windows or Linux workload in the cloud.

So we think infrastructure as a service is super-flexible, and it really kind of enables you to manage your environments however you want. We also, though, provide prebuilt services and runtime environments that you can use to assemble your applications as well, and we call these platform as a service capabilities.

One of the benefits of these prebuilt services is that they enable you to focus on your application and not have to worry about the infrastructure underneath it.

We handle patching, load balancing, high availability and auto scale for you. And this enables you to work faster and do more.

What I want to do is just spend a little bit of time talking through some of these platform as a service capabilities, so we’re going to start talking about our Web functionality here today.

One of the most popular PaaS services that we now have on Windows Azure is something we call the Azure Websites service. This enables you to very easily deploy Web applications written in a variety of different languages and host them in the cloud. We support .NET, Node.js, PHP, Python, and we’re excited this week to also announce that we’re adding Java language support as well.

This enables you as a developer to basically push any type of application into Azure into our runtime environment, and basically host it to any number of users in the cloud.

A couple of the great features we have with Azure include auto-scale capability. What this means is you can start off running your application, for example, in a single VM. As more load comes to it, we can then automatically scale up multiple VMs for you without you having to write any script or take any action yourself. And if you get a lot of load, we can scale up even more.

You can basically configure the maximum number of VMs you want to use, as well as what the burn-down rate is. This is great because it enables you to not only handle large traffic spikes and make sure that your apps are always responsive; the nice thing about auto-scale is that when the traffic drops off, or maybe during the night when it’s a little bit less, we can automatically scale down the number of machines that you need, which means that you end up saving money and not having to pay as much.

One of the really cool features that we’ve recently introduced with websites is something we call our staging support. This solves kind of a pretty common problem with any Web app today, which is there’s always someone hitting it. And how do you stage the deployments of new code that you roll out so that you don’t ever have a site in an intermediate state and that you can actually deploy with confidence at any point in the day?

And what staging support enables inside of Azure is for you to create a new staging version of your Web app with a private URL that you can access and use to test. And this allows you to basically deploy your application to the staging environment, get it ready, test it out before you finally send users to it, and then basically you can push one button or send a single command called swap where we’ll basically rotate the incoming traffic from the old production site to the new staged version.

What’s nice is we still keep your old version around. So if you discover once you go live you still have a bug that you missed, you can always swap back to the previous state. Again, this allows you to deploy with a lot of confidence and make sure that your users are always seeing a consistent experience when they hit your app.

Another cool feature that we’ve recently introduced is something we call Web Jobs. This enables you to run background tasks that aren’t tied to an HTTP request and response. So if something takes a while to run, this is a great way to offload that work so that you’re not stalling your request/response thread pool.

Basically, a common scenario we see for a lot of people is processing something in the background: when someone submits something, for example, to the website, they can go ahead and simply drop an item into a queue or into the storage account, respond back down to the user, and then with one of these Web Jobs, you can very easily run background code that can pull that queue message and actually process it in an offline way.

And what’s nice about Web jobs is you can run them now in the same virtual machines that host your websites. What that means is you don’t have to spin up your own separate set of virtual machines, and again, enables you to save money and provides a really nice management experience for it.
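
As a rough illustration of the pattern Scott describes, here is a minimal C# sketch, assuming an Azure storage queue and the Azure Storage and WebJobs SDK packages. The queue name, connection string, and class names are placeholders rather than anything shown in the keynote.

```csharp
using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

public class UploadHandler
{
    // Web app side: drop the work item on a queue and return to the user right away.
    public static void EnqueueForProcessing(string payload)
    {
        CloudStorageAccount account = CloudStorageAccount.Parse("<storage-connection-string>");
        CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("expense-uploads");
        queue.CreateIfNotExists();
        queue.AddMessage(new CloudQueueMessage(payload));
    }
}

public class BackgroundFunctions
{
    // Web Job side: the WebJobs SDK calls this whenever a message lands on the queue,
    // so the slow work happens off the request/response path.
    public static void ProcessUpload([QueueTrigger("expense-uploads")] string payload, TextWriter log)
    {
        log.WriteLine("Processing {0} in the background", payload);
        // ...long-running processing goes here...
    }
}
```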

The last cool feature that we’ve recently introduced is something we call traffic manager support. With Traffic Manager, you can take advantage of the fact that Azure runs around the world, and you can spin up multiple instances of your website in multiple different regions around the world with Azure.

What you can then do is use Traffic Manager so you can have a single DNS entry that you then map to the different instances around the world. And what Traffic Manager does is give you a really nice way to automatically, for example, route all your North America users to one of the North American versions of your app, while people in Europe will be routed to the European version of your app. That gives you better performance and lower latency.

Traffic Manager is also smart enough so that if you ever have an issue with one of the instances of your app, it can automatically remove it from those rotations and send users to one of the other active apps within the system. So this gives you also a nice way you can fail over in the event of an outage.

And the great thing about Traffic Manager, now, is you can use it not just for virtual machines and cloud services, but we’ve also now enabled it to work fully with websites.

And to show off all these great Web capabilities, as well as some of the great improvements that we’re making inside Visual Studio, I’d like to invite Mads Kristensen on stage.

Mads Kristensen – Program Manager, Web Platforms & Tools team

Thanks. Hi, folks. I’m really excited to be here today to show you a few of the brand new features for Web developers in Visual Studio and Azure Websites.

So let me start by creating a brand new ASP.NET Web application. And as a new thing, as you can see, we made it really easy for you to provision both Azure Websites and virtual machines directly from this dialog.

You can even provision a new database directly from here, which lets you set up your entire development environment ahead of time. So now my project is created and Visual Studio is provisioning Azure, but it’s also now creating publishing scripts that I can use to automate my deployment.

And I’m going to open it here in the brand new PowerShell editor in Visual Studio. I can make any modification easily. And I can even use this in my continuous integration environment.

Now, let me switch to an existing website. So this is an ASP.NET application using AngularJS on the front end, and I’ve been building this with a few friends of mine. And if we just take a look here in the browser, we can see that this is called Clip Me. And it allows me to upload animated GIFs and have text burned into the animations automatically so it’s easy to share those images.

And we’ve been working really hard on this website. And my friends have asked me here, can you please just change the background color of the header of my website?

You know, I said, “Yes, of course I can do that.” The thing is, I can’t really remember which style sheet I need to find in Visual Studio, or exactly where to find the rule set that I need to change.

Normally, what I would do is that I would open the browser development tools here, and then I would make my modifications here. And when I’m happy, I would go back to Visual Studio and I would do the same thing all over again.

But now I can simply just change directly here in Visual Studio — in the browser, and as I make the changes, Visual Studio automatically syncs any change I do in the developer tools.

I’m just going to go with my favorite shade of blue here. Now, this is using a feature called Browser Link. And Browser Link is a two-way communication channel between Visual Studio and any Web browser.

So this works in Chrome, for instance, as we have right here. And notice how my favorite shade of blue has already been applied to the header. Because if I make changes in one browser’s developer tools, Visual Studio and Browser Link make sure to stream that change into any other browser.

So now here in Chrome, I’m noticing that I actually made a typo here. I have repeated a word. So I’m going to use Browser Link again. Now, I’m going to put Chrome into design mode.

So now as I hover over any element in the browser, Visual Studio opens the exact source file and highlights my selection. Yes.

So now it’s as simple as double clicking the thing that I want to change, and as I make the change in the browser, Visual Studio just follows along. That is cool.

So this website is getting bigger and bigger, and I have a lot of JavaScript code in my project. Now, the problem with having a lot of JavaScript can sometimes be making sure that we’re following best practices all the time.

So let’s just open here one of my AngularJS directives. And notice here that Visual Studio is now running JSHint directly inside Visual Studio. JSHint is a static code analysis tool that helps me catch common mistakes.

So in this case, I forgot a semicolon, that’s easy to fix. And here, I should use three equals signs instead of two. I always make this mistake. So here we go, I save the document, the errors go away, and I’m now pretty happy. All right? I think that I have a great website now, and I’m ready to publish.

So as Scott was mentioning, I can now take advantage of a new staging feature of Azure Websites. So I’m going to publish to my staging slot here. And we’re just going to hit the button.

So now Visual Studio is publishing just my changes. And when it’s done, it’s going to open the browser, and if we just zoom in here real quick, we can see that we have “-staging” as part of the URL. So that makes it easy for me to tell that this is now my staging environment.

So now all my friends can test the website in a staging environment. And when we’re all happy and have made sure that everything works, we want to move this into production.

So to do that, I’m going to go to the Azure portal and I’m going to hit swap. So what happens is that my staging environment is being swapped for my production environment. But some of my configurations such as my SSL certificates and public domain names, they stay where they are.

And it only takes a few seconds, it’s already done. And now when we click the browse button, we’re now live in production.

So let’s actually just create a new meme here. I’m just going to drag in an image and give it some text here. So what’s happening is that when I upload the image, the website is passing the image processing off to a background task. Now, this is something that’s traditionally been a little bit problematic to do in a reliable and scalable way.

But I’m now able to take advantage of a new feature in Azure websites called Web Jobs. And this allows me to run background tasks in the same context as my website.

And you see, I already created one here called the Gif Generator. I can write a Web Job in any language that’s supported on Azure Websites. But let’s go take a look at my implementation here.

I’ve added a regular C# console application to my solution here. And I’m using the Web Job SDK, and that makes it really easy for me to listen in on any events that are happening in any of my Azure resources. And the rest of the implementation is just using regular .NET types and libraries.
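
A hypothetical skeleton of such a console app might look like the sketch below. The JobHost and trigger attributes come from the WebJobs SDK (the namespace shown is from the later 1.x packages), the container names and processing logic are purely illustrative, and it assumes the storage connection strings are set in the app’s configuration.

```csharp
using System.IO;
using Microsoft.Azure.WebJobs;

class Program
{
    static void Main()
    {
        // The host scans this assembly for trigger functions and then listens for work.
        // It picks up the storage connection string from the app's configuration.
        var host = new JobHost();
        host.RunAndBlock();
    }
}

public class GifFunctions
{
    // Fires when a new blob appears in the (illustrative) "uploads" container and
    // writes the processed result to "gifs". The real app would burn the caption in here.
    public static void GenerateGif(
        [BlobTrigger("uploads/{name}")] Stream input,
        [Blob("gifs/{name}", FileAccess.Write)] Stream output,
        string name,
        TextWriter log)
    {
        log.WriteLine("Generating GIF for {0}", name);
        input.CopyTo(output);
    }
}
```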

So this is really easy to do. And in order for me to publish my Web Job, I need to associate it with my website. Now, I’ve already done that. And it’s as simple as going up to my Web application and just making the association here with the Gif Generator console app. And now the next time I publish my website, the Web Job is published with it.

Another cool feature that we get is we get a nice dashboard that shows me the invocation log. So here, every time my Web Job has run, sometimes maybe there’s a failure. And I can very easily go in, get the insights I need, see the input and the output, as well as a full call stack so I can very easily diagnose and fix the issue.

So now we’re in production. Obviously, this website is going to go viral. So I want to make sure that I get the best user experience to all you guys. So I’m going to use the Traffic Manager, a new thing in Azure Websites.

And I can set up Traffic Manager in three different ways: I can optimize for performance, for round robin, or for failover.

So failover is the scenario where I select the primary node. And in case of any failures, Traffic Manager is automatically going to route traffic to my secondary nodes.

But since my code never fails, I’m going to optimize for performance. I’ve already set up Traffic Manager. So all I have to do here is to add my recently deployed website here into the Traffic Manager profile. And that ensures that no matter where you are in the world, you’re always going to hit the datacenter closest to you.

And now if you load the website up again, you can see we’re being served from West U.S. because we’re sitting right here in San Francisco.

So I just showed you how to easily create a development environment up front, use some of the new features of Visual Studio to create beautiful, modern, Web applications, deploy them to staging for testing, then on to production, and now scaling worldwide. Thank you very much.

Scott Guthrie – EVP, Microsoft Cloud and Enterprise Group

So as Mads showed, there are a lot of great features that we’re kind of unveiling this week. A lot of great announcements that go with it.

These include the general availability release of auto-scale support for websites, as well as the general availability release of our new Traffic Manager support for websites as well. As you saw there, we also have Web Job support, and one of the things that we didn’t get to demo which is also very cool is backup support so that automatically we can have both your content as well as your databases backed up when you run them in our Websites environment as well.

Lots of great improvements are also coming from an offer perspective. One thing a lot of people have asked us for with Websites is the ability not only to use SSL, but to use SSL without having to pay for it. So one of the cool things that we’re adding with Websites, and it goes live today, is that we’re including one IP address-based SSL certificate and five SNI-based SSL certificates at no additional cost with every Website instance.

Throughout the event here, you’re also going to hear a bunch of great sessions on some of the improvements we’re making to ASP.NET. From a Web framework perspective, we’ve got the general availability release of ASP.NET MVC 5.1, Web API 2.1, Identity 2.0, as well as Web Pages 3.1. So a lot of great new features to take advantage of.

As you saw Mads demo, a lot of great features inside Visual Studio including the ability every time you create an ASP.NET project now to automatically create an Azure Website as part of that flow. Remember, every Azure customer gets 10 free Azure Websites that you can use forever. So even if you’re not an MSDN customer, you can take advantage of that feature in order to set up a Web environment literally every time you create a new project. So pretty exciting stuff.

So that was one example of some of the PaaS capabilities that we have inside Azure. I’m going to move now into the mobile space and talk about some of the great improvements that we’re making there as well.

One of the great things about Azure is the fact that it makes it really easy for you to build back ends for your mobile applications and devices. And one of the cool things you can do now is you can develop those back ends with both .NET as well as Node.js, and you can use Visual Studio or any other text editor on any other operating system to actually deploy those applications into Azure.

And once they’re deployed, we make it really easy for you to go ahead and connect them to any type of device out there in the world.

Now, some of the great things you can do with this is take advantage of some of the features that we have, which provide very flexible data handling. So we have built-in support for Azure storage, as well as our SQL database, which is our PaaS database offering for relational databases, as well as take advantage of things like MongoDB and other popular NoSQL solutions.

We support the ability not only to reply to messages that come to us, but also to push messages to devices as well. One of the cool features that Mobile Services can take advantage of — and it’s also available as a stand-alone feature — is something we call Notification Hubs. This basically allows you to send a single message to a notification hub and then have it broadcast to all of the devices that might be registered to it.
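
For example, a back end might broadcast a toast to every registered Windows client with a couple of lines of the Notification Hubs SDK. This is only a sketch: the connection string and hub name are placeholders, and a real app would typically also register templates for iOS and Android payloads.

```csharp
using System.Threading.Tasks;
using Microsoft.ServiceBus.Notifications;

public class Broadcaster
{
    public static async Task BroadcastAsync(string message)
    {
        // Placeholders: use your own hub's connection string and name.
        NotificationHubClient hub = NotificationHubClient.CreateClientFromConnectionString(
            "<notification-hub-connection-string>", "<hub-name>");

        // One send call; the hub fans the toast out to every registered Windows device.
        string toast = "<toast><visual><binding template=\"ToastText01\">" +
                       "<text id=\"1\">" + message + "</text></binding></visual></toast>";
        await hub.SendWindowsNativeNotificationAsync(toast);
    }
}
```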

We also support with Mobile Services a variety of flexible authentication options. So when we first launched mobile services, we added support for things like Facebook login, Google ID, Twitter ID, as well as Microsoft Accounts.

One of the things we’re excited to demo here today is Active Directory support as well. So this enables you to build new applications that you can target, for example, your employees or partners, to enable them to sign in using the same enterprise credentials that they use in an on-premises Active Directory environment.

What’s great is we’re using standard OAuth tokens as part of that. So once you authenticate, you can take that token, you can use it to also provide authorization access to your own custom back-end logic or data stores that you host inside Azure.
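
As a minimal sketch of that idea, assuming a standard ASP.NET Web API back end already configured to validate the Azure AD bearer tokens, a controller can gate access with the [Authorize] attribute and read the caller’s claims. The controller name and the claim used below are illustrative.

```csharp
using System.Security.Claims;
using System.Web.Http;

[Authorize] // callers without a valid token get a 401 before this code ever runs
public class ExpensesController : ApiController
{
    public IHttpActionResult Get()
    {
        // The validated token surfaces as the current principal, so custom back-end
        // logic and data access can key off who the caller actually is.
        var identity = (ClaimsIdentity)User.Identity;
        Claim objectId = identity.FindFirst("http://schemas.microsoft.com/identity/claims/objectidentifier");
        string userId = objectId != null ? objectId.Value : identity.Name;

        return Ok(new { user = userId });
    }
}
```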

We’re also making it really easy so that you can also take that same token and you can use it to access Office 365 APIs and be able to integrate that user’s data as well as functionality inside your application as well.

The beauty about all of this is it works with any device. So whether it’s a Windows device or an iOS device or an Android device, you can go ahead and take advantage of this capability.

What I’d like to do is invite Yavor on stage to show us how you can do it.

Yavor Georgiev – Program Manager on Azure Mobile Services

Thank you. Since we launched Mobile Services, we’ve seen some strong adoption and some great apps built on top of our platform, both across the consumer and enterprise space.

If you’re not familiar with it, Mobile Services lets you easily add a cloud-hosted back end to your mobile app, regardless of what client platform you’re using.

I’m here today to talk about an exciting new set of features that makes Mobile Services even more compelling, especially in the enterprise space.

Let’s start in Visual Studio. We’ve added a new project template that lets me build my mobile service right from within VS. What’s even cooler is I can now use any .NET language. And our framework is built on ASP.NET Web API, which means I can bring my existing skills, my existing code, and I can leverage the power of NuGet.

Now, we already have a project ready here that I created using the template. And you’ll notice it has a simple structure, contains only the things I need to know about.

I have a to-do item, and that’s going to be the model for my service. And the next thing I need is a table controller that lets me expose that model to the world in a way that all our cross-platform clients understand.

And then it wouldn’t be Mobile Services without great support for scheduled jobs. So I’ll go ahead and press F5. We finally addressed one of our top customer requests and added support for local development. We get a documentation page right here with information about your API, and then we’ve even added a test client right inside the browser that lets you try it out.

I’ll go ahead and send a request. And I’ve hit a break point in my server code. Local and remote debugging now both work great with Mobile Services. Now, as expected, I get my result back in the browser. But we’ve all built to-do lists with Mobile Services. What I really wanted to build for you today is a powerful line-of-business app with the cloud.

So I was preparing here on the podium, and I noticed the mouse I was given is pretty broken. So I thought, why don’t I build a facilities app that I can use to report the issue? And then the facilities department can use that same app to take care of it. This is easy to do with Mobile Services.

The first thing I’ll do is I’ll add a class for my model. And let’s call it “facility request.” And by default, this is going to use entity framework code first, backed by a SQL database. However, as Scott mentioned, we support a variety of back-end choices including MongoDB and table storage.

The next step is to add a controller. We have first-class support for the Mobile Services Table Controller right here in the scaffolding dialog.

I can pick the model class I created, pick the context, and just press add, it’s that easy.

Now, this wouldn’t be a great enterprise app without great enterprise security. So let’s assume for a moment that my company has already federated our on-premises Active Directory with Azure.

Adding authentication to my API is as easy as adding an attribute to my controller.
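
Putting those steps together, a rough sketch of what the service-side pieces look like with the Mobile Services .NET backend SDK is shown below. The MobileServiceContext name matches what the project template typically generates, but treat the exact types and scaffolding as approximate rather than a copy of the demo code.

```csharp
using System.Linq;
using System.Web.Http.Controllers;
using Microsoft.WindowsAzure.Mobile.Service;
using Microsoft.WindowsAzure.Mobile.Service.Security;

// The model: EntityData adds the system properties (Id, CreatedAt, UpdatedAt, Version)
// that the cross-platform client SDKs rely on.
public class FacilityRequest : EntityData
{
    public string Description { get; set; }
    public string PhotoUrl { get; set; }
    public bool Resolved { get; set; }
}

// Requiring a signed-in user is a one-attribute change.
[AuthorizeLevel(AuthorizationLevel.User)]
public class FacilityRequestController : TableController<FacilityRequest>
{
    protected override void Initialize(HttpControllerContext controllerContext)
    {
        base.Initialize(controllerContext);
        // MobileServiceContext is the Entity Framework code-first context from the template.
        var context = new MobileServiceContext();
        DomainManager = new EntityDomainManager<FacilityRequest>(context, Request, Services);
    }

    // GET tables/FacilityRequest
    public IQueryable<FacilityRequest> GetAllFacilityRequests()
    {
        return Query();
    }
}
```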

Now that we’re done with our service, let’s go ahead and publish. We’ve integrated Mobile Services in the same publishing experience that Mads demoed earlier. And I can pick an existing service or I can even create a new one right from VS.

Let’s pick one I’ve created previously. And when I publish, this will deploy my code to Mobile Services, which provides a first-class hosting environment from my APIs.

Now, let’s switch for a moment to the client app we’ve built. You’ll notice our app logic is abstracted away in a portable class library. And what that lets me do is easily reuse that code across a variety of client platforms.

We’re already taking advantage of the Mobile Services portable SDK, which gives me some easy data access methods, such as the one you see here that loads up all the facility requests from the server.
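
A sketch of what that kind of data access looks like with the Mobile Services managed client SDK is below. The service URL, application key, and the client-side FacilityRequest POCO are placeholders standing in for the real app’s values.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.MobileServices;

// Client-side shape of the record; a plain POCO whose Id the service fills in.
public class FacilityRequest
{
    public string Id { get; set; }
    public string Description { get; set; }
    public string PhotoUrl { get; set; }
    public bool Resolved { get; set; }
}

public class FacilityRequestRepository
{
    // Placeholders for the real mobile service's URL and application key.
    private static readonly MobileServiceClient Client =
        new MobileServiceClient("https://contoso-facilities.azure-mobile.net/", "<application-key>");

    // Load every facility request from the server over the service's REST endpoint.
    public async Task<IEnumerable<FacilityRequest>> LoadAllAsync()
    {
        return await Client.GetTable<FacilityRequest>().ToListAsync();
    }

    // Submit a new request; the SDK serializes it and posts it to the table endpoint.
    public Task SubmitAsync(FacilityRequest request)
    {
        return Client.GetTable<FacilityRequest>().InsertAsync(request);
    }
}
```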

Now, what my app is actually missing is support for authentication. So let’s go ahead and do that.

I can take advantage of the Active Directory authentication library, which gives me a native, beautiful login experience on all my favorite clients. I can then pass the authentication token to the Mobile Services back end so then my user is logged into both places.
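
Conceptually, the hand-off looks something like the sketch below: ADAL acquires the Azure AD access token (its exact method overloads vary by platform and version, so that step is only referenced in a comment), and the token is then passed to the Mobile Services client. The payload shape for the Azure AD provider is an assumption here, so treat this strictly as an approximation.

```csharp
using System.Threading.Tasks;
using Microsoft.WindowsAzure.MobileServices;
using Newtonsoft.Json.Linq;

public static class LoginHelper
{
    // aadAccessToken is the token ADAL returned from its AcquireToken flow
    // (not shown; the overload you call depends on the ADAL version and platform).
    public static Task<MobileServiceUser> SignInAsync(MobileServiceClient client, string aadAccessToken)
    {
        // Assumed payload shape for the Azure AD provider: the raw access token.
        var payload = new JObject { { "access_token", aadAccessToken } };
        return client.LoginAsync(
            MobileServiceAuthenticationProvider.WindowsAzureActiveDirectory, payload);
    }
}
```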

Let’s launch this in the simulator. The first thing it will ask me to do is log in with my company account.

And once I’m signed in, it’s going to go and call out to our on-premises Active Directory and start pulling graph information about my user. As you see here.

Now, we don’t have a facility request created yet, so let’s go ahead and do that. So it’s a broken mouse on stage. And then we can even take a photo. There it is. Missing a button. Let’s take that picture.

And when I press “accept” the facility request will get safely stored in the Mobile Services back end.

Now, we’ve added authentication with Active Directory, but what my app’s users will really want is integration with all their other great enterprise services, including SharePoint and Office 365.

For example, the facilities department might want to create a document on their SharePoint site for every request they receive. It’s easy to build that for them with Mobile Services.

Let’s go back to our service project and to the controller we added. And we’ll find the patch method.

This method gets called every time a facility request gets updated. So I can very easily take advantage of the Active Directory authentication token to call out to the great, new set of Office 365 REST APIs, as you see here. And they let me generate the document on the fly and post it straight to SharePoint.
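
A rough sketch of how that might be wired into the controller from the earlier sketch is below. The SharePoint site URL, folder, and the way the token is obtained are all illustrative; in practice the service would exchange the incoming user token for one scoped to SharePoint before making the call.

```csharp
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;
using System.Web.Http.OData;
using Microsoft.WindowsAzure.Mobile.Service;

public class FacilityRequestController : TableController<FacilityRequest>
{
    // PATCH tables/FacilityRequest/{id}: apply the update, then drop a document into SharePoint.
    public async Task<FacilityRequest> PatchFacilityRequest(string id, Delta<FacilityRequest> patch)
    {
        FacilityRequest updated = await UpdateAsync(id, patch);
        await PostDocumentToSharePointAsync(updated);
        return updated;
    }

    private static async Task PostDocumentToSharePointAsync(FacilityRequest request)
    {
        // Illustrative only: assumes we already hold a token that is valid for the
        // SharePoint tenant and that the site and folder below exist.
        string accessToken = "<token-valid-for-sharepoint>";
        string url = "https://contoso.sharepoint.com/_api/web/" +
                     "GetFolderByServerRelativeUrl('/Shared Documents/Requests')" +
                     "/Files/add(url='request-" + request.Id + ".txt',overwrite=true)";

        using (var http = new HttpClient())
        {
            http.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);
            var body = new StringContent(request.Description ?? "", Encoding.UTF8, "text/plain");
            HttpResponseMessage response = await http.PostAsync(url, body);
            response.EnsureSuccessStatusCode();
        }
    }
}
```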

Now that we’ve made our changes, let’s go ahead and publish. And while that’s going, we can go back to our app and now play the role of the facilities department. So we get the request, we’re going to take care of the broken mouse here. And then we’ll take an after picture. There you go. Much better.

When I press “accept” my request will go through to Mobile Services and that will call out to SharePoint and generate the document.

And we can verify that by heading over to our SharePoint site, my company SharePoint site here, and if I look inside the request folder, you’ll see there’s a new document generated just a few seconds ago using my company identity.

The document itself contains all the great information that t
