As a rush of cybercriminals, state-backed hackers, and scammers continues to flood the zone with digital attacks and aggressive campaigns worldwide, it’s no surprise that the maker of the ubiquitous Windows operating system is focused on security defense. Microsoft’s Patch Tuesday update releases frequently contain fixes for critical vulnerabilities, including those that are actively being exploited by attackers out in the world.

The company already has the requisite groups to hunt for weaknesses in its code (the “red team”) and develop mitigations (the “blue team”). But recently, that format evolved again to promote more collaboration and interdisciplinary work in the hopes of catching even more mistakes and flaws before things start to spiral. Known as Microsoft Offensive Research & Security Engineering, or Morse, the department combines the red team, the blue team, and the so-called green team, which focuses on finding flaws, or on taking weaknesses the red team has found and fixing them more systemically through changes to how things are done within the organization.

“People are convinced that you cannot move forward without investing in security,” says David Weston, Microsoft’s vice president of enterprise and operating system security, who’s been at the company for 10 years. “I’ve been in security for a very long time. For most of my career, we were thought of as annoying. Now, if anything, leaders are coming to me and saying, ‘Dave, am I okay? Have we done everything we can?’ That’s been a significant change.”

Morse has been working to promote safe coding practices across Microsoft so fewer bugs end up in the company’s software in the first place. OneFuzz, an open-source Azure testing framework, lets Microsoft developers constantly and automatically pelt their code with all sorts of unusual inputs to ferret out flaws that wouldn’t be noticeable if the software were only used exactly as intended.
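At its core, that kind of fuzz testing means generating huge volumes of random or malformed input, running it through the code under test, and treating any crash as a bug to investigate. The sketch below is a minimal, self-contained illustration of that loop in Rust; it is not the OneFuzz API, and the parser, byte generator, and iteration count are all hypothetical stand-ins.

```rust
// Minimal fuzzing sketch (illustrative only, not OneFuzz): generate
// pseudo-random inputs, feed them to the code under test, and flag any
// panic as a discovered bug. Real fuzzers add coverage feedback, corpus
// management, and crash triage on top of this basic loop.

/// Hypothetical function under test: reads a length-prefixed payload.
/// Bug: it slices without checking that the buffer is long enough.
fn parse_record(data: &[u8]) -> Option<&[u8]> {
    let len = *data.first()? as usize;
    Some(&data[1..1 + len]) // panics when `len` exceeds the remaining bytes
}

/// Tiny xorshift generator so the example needs no external crates.
fn next_rand(state: &mut u64) -> u64 {
    *state ^= *state << 13;
    *state ^= *state >> 7;
    *state ^= *state << 17;
    *state
}

fn main() {
    std::panic::set_hook(Box::new(|_| {})); // keep crash output to one line
    let mut seed = 0x1234_5678_9abc_def0_u64;

    for i in 0..100_000u32 {
        // Build a random-length buffer of random bytes.
        let len = (next_rand(&mut seed) % 64) as usize;
        let input: Vec<u8> = (0..len).map(|_| next_rand(&mut seed) as u8).collect();

        // Any panic inside the parser counts as a flaw worth reporting.
        if std::panic::catch_unwind(|| parse_record(&input)).is_err() {
            println!("iteration {i}: parser crashed on input {input:?}");
            return;
        }
    }
    println!("no crashes in 100,000 iterations");
}
```

Run repeatedly, a loop like this trips the missing bounds check within a handful of iterations; production fuzzers differ mainly in how cleverly they choose the next input and how they report what they find.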

The combined team has also been at the forefront of promoting the use of safer programming languages (like Rust) across the company, and it has advocated embedding security analysis tools directly into the real compiler used in the company’s production workflow. That change has been impactful, Weston says, because developers are no longer running hypothetical analyses in a simulated environment, a step removed from real production, where some bugs might be overlooked.
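Part of Rust’s appeal for this kind of work is that an entire class of memory-safety mistakes is rejected by the same compiler that builds the shipping binary. The sketch below is illustrative only and uses hypothetical names, not Microsoft code; the commented-out block is the defect, and uncommenting it produces a compile error rather than a use-after-free.

```rust
// Illustrative only: in Rust, the production compiler itself rejects a
// dangling reference, so the check happens at build time rather than in
// a separate, after-the-fact analysis pass.

fn main() {
    // The defect, kept commented out so this example compiles:
    //
    // let dangling: &[u8];
    // {
    //     let chunk = vec![0u8; 32];
    //     dangling = &chunk[..8]; // error: `chunk` does not live long enough
    // }                          // `chunk` is freed here
    // println!("{dangling:?}");  // would be a use-after-free in C or C++

    // The safe pattern the compiler steers you toward: own the data you keep.
    let chunk = vec![0u8; 32];
    let kept: Vec<u8> = chunk[..8].to_vec();
    println!("first eight bytes: {kept:?}");
}
```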

The Morse team says that the shift toward proactive security has led to real progress. In a recent example, Morse members were vetting legacy software—an important part of the group’s job, since so much of the Windows codebase was developed before these expanded security reviews. While examining how Microsoft had implemented Transport Layer Security 1.3, the foundational cryptographic protocol used across networks like the internet for secure communication, Morse discovered a remotely exploitable bug that could have allowed attackers to access targets’ devices.

As Mitch Adair, Microsoft’s principal security lead for Cloud Security, put it: “It would have been as bad as it gets. TLS is used to secure basically every single service product that Microsoft uses.”
