Mads Torgersen and Dustin Campbell on the future of C#

How has open source changed it - and can it survive Windows PC decline?

Exclusive interview At Xamarin's Evolve conference in Orlando, at the end of April 2016, I had a rare opportunity to sit down with Mads Torgersen and Dustin Campbell to discuss the future of the C# programming language.

Torgersen is the Program Manager for the C# Language. He maintains the language specification and runs the language design process for C#. Dustin Campbell is a Program Manager on the Visual Studio team.

This is a moment of change for Microsoft's development tools, as the company transitions from focusing entirely on Windows to creating cross-platform tools that it hopes will push developers towards its Azure cloud services - either as a back-end for mobile applications or as a deployment platform for server applications, irrespective of the operating system.

In pursuit of this goal, Microsoft forked its .NET development platform in November 2014, creating the open source .NET Core project which runs on Windows, Mac and Linux. Earlier this year, the company also acquired Xamarin, enabling C# and .NET developers to target Android, iOS and Mac as well as Windows.

Alongside these initiatives, Microsoft has continued to evolve the C# language. The forthcoming C# 7.0, expected to ship with the next version of Visual Studio (now in preview), includes several new features. One is the ability to return multiple values from a function: functions can return tuples, which are ordered lists of elements. Another is an enhanced switch statement that lets you test for types as well as values, though the more advanced pattern matching found in functional languages has been deferred to a future version.
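A brief sketch of how these two C# 7.0 features look in practice (the names `MinMax` and `Describe` are illustrative, not from the interview):

```csharp
using System;

class Demo
{
    // Tuple return: a single function hands back two values at once
    static (int Min, int Max) MinMax(int[] values)
    {
        int min = values[0], max = values[0];
        foreach (var v in values)
        {
            if (v < min) min = v;
            if (v > max) max = v;
        }
        return (min, max);
    }

    // Enhanced switch: cases can test the runtime type and bind a variable
    static string Describe(object o)
    {
        switch (o)
        {
            case int n when n > 0: return $"positive int {n}";
            case int n:            return $"int {n}";
            case string s:         return $"string of length {s.Length}";
            default:               return "something else";
        }
    }

    static void Main()
    {
        var (min, max) = MinMax(new[] { 3, 1, 4, 1, 5 });
        Console.WriteLine($"{min}..{max}");   // prints "1..5"
        Console.WriteLine(Describe(42));      // prints "positive int 42"
    }
}
```

The caller can deconstruct the tuple into named locals, so neither `out` parameters nor a hand-written wrapper class is needed just to return a pair.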

Other new features include local functions - that is, functions nested within other functions, with access to the enclosing function's local variables - and the ability for functions to return values by reference (ref), complementing the existing ability to pass in ref parameters.
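Both features, sketched minimally (the function names here are hypothetical):

```csharp
using System;

class Demo
{
    static void Main()
    {
        int[] data = { 10, 20, 30 };
        int offset = 5;

        // Local function: nested inside Main and able to capture
        // Main's locals (data and offset) directly
        int Adjusted(int i) => data[i] + offset;
        Console.WriteLine(Adjusted(1));   // prints "25"

        // Ref return: Pick hands back an alias to an array slot,
        // not a copy, so writing through it mutates the array
        ref int second = ref Pick(data, 1);
        second = 99;
        Console.WriteLine(data[1]);       // prints "99"
    }

    static ref int Pick(int[] xs, int index)
    {
        return ref xs[index];
    }
}
```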

How has making C# open source impacted the development of the C# language?

Torgersen: It’s had a huge impact. We’ve essentially open sourced the language design. Everything is out there to be scrutinized, and that means we get an immediate feedback loop that we didn’t have before. In the old Microsoft we had a focus on doing cool things in secret and then revealing them unto the world. The risk is that you get very far down the wrong path and then it’s very expensive or potentially too late to backpedal. With the open process, we get a lot of micro-adjustments along the way instead.

Does the process slow you down, as the Java Community Process has for Java? Or does the fact that Microsoft still decides what goes in avoid this?

Torgersen: Yes, that's the key difference. We haven’t changed the decision process much. It’s still a bunch of people that get together in a room, or if necessary take a vote, or call up Anders [Hejlsberg] and say what should we do, you’re the arbiter. That means we are not impeded by a cumbersome decision process. We don’t have to appease everyone.

If you look at the history of C#, there have been major innovations like LINQ, and the Roslyn compiler, as well as smaller changes. Will there be further big changes, or is C# now in a more incremental phase?

Torgersen: I would say the last major thing we did in C# was Async, which has been massively successful, to the point of many other languages copying it now. That was in response to a big need. Like, you really need to deal with this now, or all code forever will look like spaghetti. There isn’t yet a clearly defined thing out there that has the same kind of urgency. So right now we are in a more incremental mode, we are releasing C# versions faster, with smaller features.

Campbell: The introduction of Roslyn, being a big thing, made it much easier to introduce those smaller pieces as well. It’s much lower cost for us to go and make those little changes that make C# a little less cumbersome in some of the corners of the language. Roslyn enables that.

Someone said to me, C# is nearly perfect but null pointers are a disaster. Have you thought about a switch that would eliminate that from the language?

Torgersen: This is something that Miguel [de Icaza] feels strongly about and so do we. We actually do have a sketch of a feature that we want to do; it won’t be in the next version, but we are thinking the one after. Null pointers are there to stay, they are an integral part of the semantics of the language, but we should let you state your intent. For this return type, or field type, or whatever - is it supposed to be null or not?

So something like, put a question mark on the type if this thing is supposed to be null, and then start assuming that if you didn’t put the question mark it’s not intended to be null. We know its default value is going to be null, so we will check your constructors: do they make sure to overwrite the null before the object is constructed? We’ll check that you don’t assign nulls into it. And if you do put the question mark on and say yes, I will have the nulls, then we will start impeding you from de-referencing. We will say, you can’t dot [ie. de-reference] until you’ve proven to us that this is not null.

We have to do it in a way that we recognise the ways that people are already checking for null. Then if we see, oh you did that up there, we do a flow analysis and say OK, we trust you, you checked, you can go ahead and dot. And by the way, if we can’t figure it out, you can still tell us that you really want to do it.
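The scheme Torgersen sketches here later shipped, in C# 8.0, as nullable reference types. A minimal illustration of the behaviour he describes (the `Person` type is invented for the example):

```csharp
#nullable enable   // the opt-in switch that eventually shipped in C# 8.0
using System;

class Person
{
    public string Name;        // no question mark: not intended to be null
    public string? Nickname;   // question mark: null is expected here

    public Person(string name)
    {
        // The compiler checks that constructors overwrite the default
        // null in non-nullable fields such as Name
        Name = name;
    }

    public int NicknameLength()
    {
        // Dotting Nickname directly would be flagged; an explicit null
        // check lets the compiler's flow analysis prove it is safe
        if (Nickname != null)
            return Nickname.Length;
        return 0;
    }
}
```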

So it is very much on our radar. It is one of those things we regret. The computer science grand old man, Tony Hoare, who actually worked for Microsoft Research for many years and still comes in on Thursdays, he gives a talk about his billion dollar mistake, how he invented the null pointer and wants to apologise.
