Microsoft YARP (github.com/microsoft)
222 points by gokhan on Feb 20, 2022 | 141 comments


I'm glad to see Microsoft supporting more low-level magic like this. I've written my own version of ReverseProxyMiddleware more times than I can recall... The most painful parts were always around translating between HttpClient and HttpContext. Looks like Microsoft abstracted this exact concern away under the IHttpForwarder and ForwarderHttpClientContext types.

Looking at the docs around this, I'd probably start w/ Direct Forwarding so I have more control over how things route:

https://microsoft.github.io/reverse-proxy/articles/direct-fo...
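
As a rough sketch of what that linked docs page describes (assuming the Yarp.ReverseProxy package is referenced; the destination URL and handler settings here are illustrative, not prescriptive), direct forwarding with IHttpForwarder looks roughly like this:

```csharp
// Minimal direct-forwarding sketch: you pick the destination per request.
using System.Net;
using Yarp.ReverseProxy.Forwarder;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddHttpForwarder();
var app = builder.Build();

// One shared HttpMessageInvoker, configured for proxying rather than
// browser-like behavior (no redirects, no cookies, no decompression).
var httpClient = new HttpMessageInvoker(new SocketsHttpHandler
{
    UseProxy = false,
    AllowAutoRedirect = false,
    AutomaticDecompression = DecompressionMethods.None,
    UseCookies = false
});

app.Map("/{**catch-all}", async (HttpContext context, IHttpForwarder forwarder) =>
{
    // This is the "more control over how things route" part: any C# logic
    // can choose the destination prefix before forwarding.
    var error = await forwarder.SendAsync(context, "https://localhost:10000/", httpClient);
    if (error != ForwarderError.None)
    {
        // Inspect context.GetForwarderErrorFeature()?.Exception for details.
    }
});

app.Run();
```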


>> We found a bunch of internal teams at Microsoft who were either building a reverse proxy for their service or had been asking about APIs and tech for building one

I'm really curious, does Microsoft have a crew of anthropologists walking around to figure out which team is reinventing wheels?


Honestly I think any company that exceeds a certain size could do with a full time team to investigate and solve NIH and duplication.

I mean Google reinvents chat every couple of years, and that's just the ones that come out in public <_<


Fundamentally I believe it’s a communication problem + a problem of “engineers gonna engineer” to launch the next Big Win™ for their team / resume.

If I were looking for an honest solution for the first part, I might start with a standardized way of classifying technical problems and solutions. Maybe then it would be possible to build a directory of sorts for existing solutions that could be referenced when the need for a solution arises anywhere within the company.


Typically platform teams like ours (.NET person here) get to see what a wide variety of teams are doing or are trying to do. YARP was a case of that happening, and it felt like a good opportunity to build something low-level and extensible that could help other teams at Microsoft and the OSS community, and could directly drive improvements into the .NET networking stack.


I worked with MSFT for some time. Basically you can do your own stuff until you need to go to production. And if you do all things right "per process", somebody will "glimpse" into your architecture and maybe even code. And in most cases this particular individual has seen a lot and is in a position to identify patterns. TBH I quite enjoyed how their corp machinery works in the cloud world.


> I'm really curious, does Microsoft have a crew of anthropologists

Quite literally, yes. Microsoft research has (or at least had) anthropologists on crew.


That's actually cool! I feel it's great they do some introspective work, no matter the company size.


Microsoft employee here. Not really. It's probably via word of mouth that they realized people were reinventing the wheel.


You know how barnacles attach to a whale, to clean it? Maybe you guys need to spin off a whale-cleaner startup =D [edit] I realized it's the fish that clean the barnacles off it. So maybe like that. Anyway I'm just saying that an organization that is carrying that much dead weight just on internal organization is probably desperate for some better method than word of mouth to identify stuff like that.


One thing I'd like to add as a potential differentiator as well is that YARP runs very well on Windows and, because it's built on ASP.NET Core, can run inside of IIS and directly on HTTP.sys as well (which means we can take advantage of cool features like http.sys request delegation where possible https://github.com/microsoft/reverse-proxy/commit/b9c13dbde9...). This means you get platform portability AND deep platform integration for free.


YARP is yet another example of how C# is becoming a dominant systems language.

The ease with which you can build a tailor-made reverse proxy is pretty amazing.


>C# is becoming a dominant systems language

In 2022 everything is a system programming language. Next on that list is Javascript, I am sure.


>In 2022 everything is a system programming language. Next on that list is Javascript, I am sure.

Yes but cool people will write their systems software in Typescript. :)


JS wasn't sufficient there, so we chose Go (Golang)


> yet a another example of how C# is becoming a dominant systems language

Perhaps a bit of a stretch. I am currently choosing Go for a more systems-like project because of startup time and GC pauses in C#/.NET. Definitely nice to see Native AOT, which should address the startup time issues.


It is a stretch but a viable and reasonable contender. C# spans a wide range ... which is a weakness but also its greatest strength.


Maybe a slight stretch to say that it is a dominant systems language, but not much of one to say that it is becoming one.

Microsoft made a huge organizational change to support their cross-platform initiatives.


I guess I have to ask: What exactly is the use case for a tailor made reverse proxy? What customizations do people typically want to offload to the reverse proxy?


The main use case that comes to my mind is when you want to deploy a proxy in an environment where the developer needs full oversight of the proxy internals and doesn’t want to learn the particulars of configuring any given proxy software suite.

Additionally, YARP is cross-platform capable, so you can run the same proxy on Linux, macOS, and Windows. NGINX doesn’t even provide that, for example.

In terms of customizations, YARP is more like “what feature do you need to build in” than “how do I need to configure”. You can quickly build a simple reverse proxy and just about as easily setup a full load balancing solution, but the simple use case doesn’t need to include any of the code for the advanced use case.


> The main use case that comes to my mind is when you want to deploy a proxy in an environment where the developer needs full oversight of the proxy internals and doesn’t want to learn the particulars of configuring any given proxy software suite.

Yes, when does this happen? That's what I'm trying to understand. You've provided some possible feature benefits to a DIY solution but you're really just trading NGINX internals for YARP internals here. At first glance, to my eye, one of those is going to have better community support and documentation.

> Additionally, YARP is cross-platform capable, so you can run the same proxy on Linux, macOS, and Windows. NGINX doesn’t even provide that, for example.

Yes it does? You might need to elaborate because this is just incorrect but I suspect you're trying to make a different point.

> In terms of customizations, YARP is more like “what feature do you need to build in” than “how do I need to configure”. You can quickly build a simple reverse proxy and just about as easily setup a full load balancing solution, but the simple use case doesn’t need to include any of the code for the advanced use case.

My main issue with this idea is that the average consumer of this is going to be better positioned to handle the engineering hurdles of writing their own advanced use cases.


YARP was not a self motivated product. Engineering groups within Microsoft asked the .NET group to help them with their custom (.NET) reverse proxies. And the .NET group added it to their portfolio.

So you are right, it is a very specialized situation, but the key stakeholders who asked, contributed and collaborated with the .NET group are exactly these engineers who previously did a DIY solution.


> YARP was not a self motivated product. Engineering groups within Microsoft asked the .NET group to help them with their custom (.NET) reverse proxies. And the .NET group added it to their portfolio.

This is an interesting take that I think is perhaps naive. Microsoft has a vested interest in pushing .NET as a development platform and can push teams to use the company's flagship programming stack in everything it does. From the perspective of individual teams within the company this may not appear to be self-motivated, but how could it not be, given the driving requirements to use MS-approved technology stacks?

What your comment misses the mark on is what the driving force behind those engineering decisions is. If the decision is driven by corporate guidance to use .NET or else (this is a hypothetical), it's not exactly because the engineers themselves wanted to write something from scratch in .NET to solve a particular well-solved problem.


I think they are quite pragmatic. For sure they will prefer a language which they trust and can influence (see here) for green field assets but that is just one of many considerations.

Office for example took React Native as a UI toolkit and invested in that instead of .NET. TypeScript was invented to help them. The Windows division uses C++-based COM / WinRT and built an independent framework.

Microsoft is not a pure .NET shop. They are not even an anti-Java shop


> I think they are quite pragmatic. For sure they will prefer a language which they trust and can influence (see here) for green field assets but that is just one of many considerations.

While this is no doubt true the implied context of my reply was to the idea that Microsoft's engineers were selecting the best available option, the terms of "best" insinuating "best for anyone" rather than "best for software developer at Microsoft" -- a key difference.

> Office for example took React Native as a UI toolkit and invested in that instead of .NET.

Only after many years of trying to make WPF work and seeing the tide go another way.

> Microsoft is not a pure .NET shop. They are not even an anti-Java shop

For sure, C# and J# were basically the replacements for the failed attempted co-opting of Java that was J++. As far as I can tell their only Java related product is their own build of OpenJDK designed to be used on Azure. It's unclear what contributions they've really made in the space.

While they may not be a "pure .NET shop", I'd bet they're the closest thing out there.

I'm not suggesting Microsoft 100% ignores broader trends but this is sort of an aside from the crux of the original commentary.


> the terms of "best" insinuating "best for anyone" rather than "best for software developer at Microsoft" -- a key difference.

As Ballmer used to say: “developers, developers, developers”. Microsoft has long known that what is best for developers is what is best for Microsoft.


> Yes, when does this happen?

In highly regulated or extremely sensitive industries.


> Additionally, YARP is cross-platform capable, so you can run the same proxy on Linux, macOS, and Windows. NGINX doesn’t even provide that, for example

nginx runs on all three of those platforms. If you mean a single binary, this is surely misleading, as you just move the burden from getting your platform-specific binary to your platform-specific .NET runtime.


NGINX is also more portable. It runs on more architectures and operating systems than .NET/C#.

I can literally throw a stick and hit several devices in my office that NGINX runs on but C# does not.


Does nginx have Tizen support?


> Only the select() and poll() (1.15.9) connection processing methods are currently used, so high performance and scalability should not be expected.

https://nginx.org/en/docs/windows.html


A common one are multi-tenant applications where you want to dynamically (depending on data in a database for example) route requests to different places (like docker containers).

The first specific example I can think of is JupyterHub - it handles user management and starting/stopping Jupyter Notebook servers, then provides a reverse proxy for users to access their notebook servers directly.


We have used this on a .NET Core app when YARP was only a very basic, half-abandoned solution.

Basically, we needed to do auth on each request (can the user see this, per a slightly complicated rights system) and we routed to automatically spun-up containers, so discovery was also custom


We have one at work. This came from a distrust of developers’ abilities to correctly configure Apache and NGINX and a desire for the ability to roll out company wide changes that affect reverse proxying or headers we always want to set or remove on each request cycle.


That's interesting that the time spent developing your own solution was viewed as simpler and a better investment than using existing solutions out there (even commercial ones).

If you can speak to it, when it was rolled out were there numerous issues related to clients misbehaving? IME projects like this are "easy" to get out the door as it were, but usually wind up causing a load of support headaches due to massive implementation gaps.


Custom load balancing rules could be an obvious usecase.


>YARP is yet a another example of how C# is becoming a dominant systems language.

It won't become a systems language until it lets you directly access hardware and do memory management, has improved performance, and is natively AOT compiled.


It already is a systems language and has been since the outset. Managed code is the preference, but you can certainly run unmanaged code with direct memory access if you want to.

https://docs.microsoft.com/en-us/dotnet/csharp/language-refe...

https://github.com/MichalStrehovsky/zerosharp


I don't know if 22k lines of C# and 100k lines of json is an easy project.


I am late to the party here.

What is the purpose of a reverse proxy?

Reading online: security, load balancing, https seems to be mentioned.

It appears to me that a reverse proxy is now doing what a web server used to do back in the days of IIS/Apache.

I see a lot of programs meant to operate on the web that use very simple http servers.


I think of a reverse proxy as more of a general architectural pattern - when you have cross-cutting concerns for an application with a relatively broad protocol (HTTP) you can implement them as a networking layer in front of the service(s). Indirection and abstraction are powerful tools in computing.

Common reasons for such a split:

1. Composing several different services into a single internet domain

2. Putting stateless or CPU-heavy workloads onto different infrastructure from long-lived, IO-intensive tasks

3. Variation of the above, allow for multiple server processes in server environments which can't scale up to single multi-threaded processes well (CRuby/CPython)

4. Split security concerns off onto a box which has a much smaller attack surface, such as a box meant to be deployed in a DMZ (examples would be HTTPS termination, authentication and authorization policy enforcement)

5. Caching of responses (this is Varnish's claim to fame)

6. Variation of the above, some CDNs work by hitting an otherwise private server for data, then replicating throughout the network for locality to end-users.

7. Application-specific behavior patching, such as adding new API endpoints or changing the behavior of existing ones

In many of these cases, people use something like NGINX, Apache or a commercial solution like an F5. In other cases (such as application-specific business logic) people will write their own code, perhaps in the future using this Microsoft project as a starting point.


to add to your list:

8) Avoid cross-origin problems and prevent unnecessary OPTIONS preflights that result in unneeded roundtrips by bundling multiple FEs+BEs into a single origin with different paths (as opposed to multiple domains/origins).
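
As a sketch of that pattern in YARP's JSON configuration (cluster names, ports and addresses here are hypothetical; the schema follows the YARP config docs), a single origin can fan out by path so the browser never crosses origins:

```json
{
  "ReverseProxy": {
    "Routes": {
      "spa": { "ClusterId": "frontend", "Match": { "Path": "/{**catch-all}" } },
      "api": { "ClusterId": "backend", "Match": { "Path": "/api/{**catch-all}" } }
    },
    "Clusters": {
      "frontend": { "Destinations": { "d1": { "Address": "http://localhost:3000/" } } },
      "backend": { "Destinations": { "d1": { "Address": "http://localhost:5000/" } } }
    }
  }
}
```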


Have a look at Traefik, Envoy, or Istio proxy. Load balancing, TLS termination, authorization and authentication would be most common use cases.


How does this compare to nginx? Does it support WebSockets? I have an ASP.NET Core WebSocket-based app that needs to support hundreds of thousands of concurrent WebSocket connections. I'll be taking a look at this because I don't want to expose Kestrel directly to the internet.


Used it several weeks ago at work in an app which actively uses WebSockets. It just works.


The main difference I see is that YARP is added right into the dotnet project itself. See the getting started :

https://microsoft.github.io/reverse-proxy/articles/getting-s...

From my understanding, Kestrel passes requests to this.

> The proxy server is implemented as a plugin component for ASP.NET Core applications. ASP.NET Core servers like Kestrel provide the front end for the proxy by listening for http requests and then passing them to the proxy for paths that the proxy has registered.
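
For reference, the getting-started wiring linked above boils down to a few lines in Program.cs (a sketch, assuming the Yarp.ReverseProxy NuGet package and a "ReverseProxy" section in appsettings.json):

```csharp
// Minimal YARP setup: Kestrel accepts the request, MapReverseProxy
// hands matching paths to YARP's proxy pipeline.
var builder = WebApplication.CreateBuilder(args);

// Load routes and clusters from the "ReverseProxy" config section.
builder.Services.AddReverseProxy()
    .LoadFromConfig(builder.Configuration.GetSection("ReverseProxy"));

var app = builder.Build();

// Register the proxy endpoints with ASP.NET Core routing.
app.MapReverseProxy();

app.Run();
```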


I see. So it can operate in-process? Could in theory be much faster than nginx out-of-process model.


Why would someone need an in-process reverse proxy? I don't see what an in-process proxy would add that a request routing engine couldn't do. Or is the point that you can now add your reverse proxy configuration as just another project within the same solution?


And you could dynamically and efficiently add/remove web apps and endpoints assuming your entire codebase is in C#.


You can also block the clients for a brief period while you make routing changes, or wait for any other arbitrary event to complete. This is one way to achieve zero-downtime upgrades, even in situations where the service is running as a single binary on a single host.


We use it to proxy requests from the "modern" part of the application, moved to .NET Core, to legacy parts that are yet to be converted.


This is a "premier use case" we've seen people use YARP for.


I was assuming more the latter. Also benefit is you can extend and customize in C#.


You might need to deploy a single executable rather than multiple.


I think it either operates as a traditional (heavily customizable) reverse proxy OR you have an app and forward x% of your API surface to second-tier services.

Imho: no in-process reverse proxying. That would not make sense.


So, basically vendor lock-in is the difference?


Extensibility via C# for custom rules. It's not as attractive if you have nothing to customize, but we've found lots of developers want to write code to influence the proxying. If you're a .NET developer or you don't like writing Lua (what nginx uses), then you can use YARP.
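
A hypothetical sketch of what "code to influence the proxying" can look like with YARP's transforms API (AddTransforms and AddRequestTransform are real YARP APIs; LookupTenant is a made-up helper standing in for your own business logic):

```csharp
// Run arbitrary application code on every proxied request.
builder.Services.AddReverseProxy()
    .LoadFromConfig(builder.Configuration.GetSection("ReverseProxy"))
    .AddTransforms(transformBuilder =>
    {
        transformBuilder.AddRequestTransform(transformContext =>
        {
            // Any C# you like: consult your own services, rights system,
            // rule engine, etc., then shape the outgoing request.
            var tenant = LookupTenant(transformContext.HttpContext); // hypothetical
            transformContext.ProxyRequest.Headers.Add("X-Tenant", tenant);
            return ValueTask.CompletedTask;
        });
    });
```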


It supports WebSockets, HTTP/2 (including gRPC) and HTTP/3 (with .NET 6+).


They should’ve called it SNAP: Somebody Needs A Promotion


It's a reverse proxy so it's "pray", not exactly confidence inspiring! :-)


What are the odds that there's an AWS employee that will backronym this as part of their L7 promo doc for 2023?


This is a good one. Will remember it for the next project


"...narp?"


No luck caching them proxies then?


Just the one actually


Not Another Reverse Proxy!


Worth noting that YARP (or, as someone eloquently described it, PRAY) was another Microsoft project which basically killed a previous open source project that did the same thing:

https://github.com/proxykit/ProxyKit


We know there is an ongoing issue with MS's relationship with the .NET OSS ecosystem, but in this case I am totally fine with this. I proved the need for a code-first, highly extensible/customisable reverse proxy (vs other approaches with config only and/or scripting). I also discovered on my journey there were things that needed to happen in layers I couldn't control or influence (e.g. HttpClient, Kestrel etc) that the YARP team could get done. I expressed these things when I met with the team.

Ultimately we got a solid product with a not-too-dissimilar API and I have one less thing to maintain :)


MIT license, nice.


I think it was initially Apache-licensed but they re-licensed to MIT in order to submit it to CNCF.


Reminds me of Hot Fuzz (2007): https://youtu.be/qR8c-pfMRqA?t=61


Yet Another Reverse Proxy.

Cute.


Sometimes it feels like Microsoft and/or C#-enjoyers just want to re-implement everything that already exists. While having many implementations to choose from if you need to select a solution to an outstanding problem is great, it makes me wonder why re-implementing something with no discernible benefit keeps eating time/resources all over the place.

In similar cases of implementing reverse proxies in C, C++, Rust, Go and Java, there were generally benefits in various shapes and sizes like different security models, performance vs. capability trade-offs and integrated vs. specialised solutions. In this case, it just seems to be a "we don't want the stuff we didn't build ourselves" which is a shame considering the benefits of pooling resources on existing implementations would have.


To clarify, are you saying that:

1. There are already good reverse proxy libraries in C#?

2. There are already good reverse proxy libraries in other languages?

3. There are already good reverse proxy applications?

If it's #1, then yeah, it would be nice if the ReadMe explained why YARP is different. To be fair, they're at least consolidating a bunch of efforts within Microsoft:

> We found a bunch of internal teams at Microsoft who were either building a reverse proxy for their service or had been asking about APIs and tech for building one, so we decided to get them all together to work on a common solution, this project.

If it's #2, then what do C# developers do? Having to build your proxy in another language isn't a deal-breaker, but there are advantages to keeping your project/team/company on a single language.

(And Microsoft does contribute to Envoy [1][2].)

If it's #3, that only works up to a point. My old team used Nginx, which was perfect in the beginning. But over time we needed to customize things in ways that were difficult to do within Nginx's architecture (either in Lua or in C), leading to hackier and hackier solutions. Using a proxy library to build exactly what you need can make things much simpler.

[1] https://blog.envoyproxy.io/general-availability-of-envoy-on-...

[2] https://techcrunch.com/2020/08/05/microsoft-launches-open-se...


> But over time we needed to customize things in ways that were difficult to do within Nginx's architecture (either in Lua or in C), leading to hackier and hackier solutions. Using a proxy library to build exactly what you need can make things much simpler.

This is the central argument for me regarding why we don't "just use nginx". We actually do use it for some corporate websites, but not for our main product.

There is a ton of power hiding here... With a well-supported first-party RP framework, how long would it take for a determined developer to replicate the most important bits of the nginx feature set? Imagine being able to throw a blazor dashboard on top of this stuff. Wiring up real-time events/metrics would be super trivial if you are even remotely interested in learning how the DI system operates. Mix in a little bit of SQLite and you could have something I may argue provides a better UX than what nginx offers today.


> But over time we needed to customize things in ways that were difficult to do within Nginx's architecture (either in Lua or in C),

Yup. Eventually a configuration language becomes a shitty programming language, so why don't we just use a programming language?



Can you share more specifics on what you needed to customize that went beyond nginx's capabilities?


This was 5+ years ago so I'm a little hazy on the details. A lot of it was just plugging into our infrastructure for config, service discovery, rate limiting, logging, etc.

- Logging the exact metrics we wanted to our metrics service. (We wrote a bunch of Lua code to do this.)

- Loading the set of backend services from Zookeeper. (We had a sidecar process that would periodically sync from Zookeeper to a config file, then call `nginx reload`.)

- Routing requests to different datacenters based on where the user's data was homed. (We had Go libraries to do this, and it would have been nice to be able to just call into that. I think we ended up putting this functionality in the sidecar.)

- Changing the way we streamed a response based on an HTTP header from the application. (We ended up writing a C module for this.)

- Changing the backend selection algorithm from "least connections" to "best of 3 random choices". Nginx added support for this in 2018, apparently.

There were a ton of things we solved with Lua, which wasn't ideal. Since that's the only Lua in our codebase, everyone who touches it has to make a mental switch to remember all the weird edge cases (as you do with any language), plus 1-based array indexes, plus no static types.

Plus, any Lua or C you write has be contorted to fit Nginx's multi-stage request/response architecture. If you're writing a plugin that is meant to play nice with other plugins, then the contortions may be worth it. But for us, a proxy library would have given us the freedom to write code that was simpler and easier to understand.

The team moved to Envoy after I left: https://dropbox.tech/infrastructure/how-we-migrated-dropbox-...


Your application does indeed sound like it's beyond OTS solutions, thanks for sharing!


There aren't a lot of good proxy development frameworks. There are decent lower level libraries, and proxy servers with limited scriptability.

I've spent the last 4 years at Fly.io wishing for a nice, full featured, proxy framework. This looks pretty darn good to me.


I find hyper quite nice to work with to build reverse proxies, mostly thanks to the HTTP client and server having common types for request, response, headers and body streams etc. I agree that it is only a decent lower level library rather than a full featured proxy framework though.


A Ruby on Rails shop like fly.io would use .NET?


Wait, fly.io is a Rails shop?

(At any rate, they employ Chris McCord, the creator of Phoenix, and have spent considerable resources pushing Phoenix. So it's clear that they aren't beholden to any one language or ecosystem.)


I think we have more Rust and Go than Ruby at this point. Probably slightly less Elixir.

For better or worse we're not really fixated on one language.


He's explaining the use case for an RP framework in your language of choice, not his desire at fly.io (I guess :))


The Go standard library?


> re-implementing something with no discernible benefit keeps eating time/resources all over the place.

> is a shame considering the benefits of pooling resources on existing implementations would have.

What exactly would you prefer these developers spend their time on instead of this work?

There are a lot of benefits to being able to integrate your own reverse proxy directly into your software stack. C# developers enjoy these as much as anyone else would.


> There are a lot of benefits to being able to integrate your own reverse proxy directly into your software stack. C# developers enjoy these as much as anyone else would.

I take this at face value but it would be nice to know what the benefits are vs nginx and other off the shelf solutions.

The readme explains the benefit for Microsoft but not how it stacks against off the self solutions.


> it would be nice to know what the benefits are vs nginx and other off the shelf solutions.

Biggest difference in my view is being able to make synchronous blocking calls directly into any arbitrary business code in order to determine the fate of a request. This can include rule/policy engines as well as auditing & tracing frameworks.


Lua in nginx can get you very far, in my experience.


Lua vs C# - choose your poison. There's a ton of ASP.NET Core libraries for handling HTTP concerns, e.g. JWT handling. Some of these are things that might need the licensed version of nginx.

Also it's embeddable and self-hostable into any .NET / AspNet Core application.


One benefit is that you can run it on your existing IIS cluster where you already run your stuff.


> The readme explains the benefit for Microsoft but not how it stacks against off the self solutions.

Totally agree. Seems to me that the main benefit is that it's easy to extend in .NET, vs. C/C++/Lua/WASM.


>Sometimes it feels like Microsoft and/or C#-enjoyers just want to re-implement everything that already exists

It sometimes feels to me, like a huge percentage of developers reinvent the wheel


There are thousands of Java accounting apps and thousands of shitty medical billing websites.

I think most devs are definitely reinventing the wheel for the ten thousandth time


And for good reasons. I think there is an almost ignorant annoyance with devs who reinvent, but a lot of times they’re reinventing a punctured wheel or the wheel is for a bulldozer and they need one for a minivan.

I had this exact experience: “why should I reinvent the wheel?” Oh this solution doesn’t let you apply partial payments or apply one payment to multiple invoices. This one doesn’t create account statements. This one can’t fax invoices. This one only supports Stripe (and has its own undocumented billing API because who would use anything other than Stripe?). This one has a weird XML definition system for subscription pricing which makes it difficult if not impossible to handle negotiated rates on a per-account basis. Not to mention having to teach billing people how to write the weird XML. This one won’t prorate invoices. This one is a cloud service that costs $100+ a month and is liable to be unavailable when I really need it! They all have weird features that I will never use.

Ok, we’ll go with this imperfect one and make some changes. But where’s the entry point? How does weird framework in weird language even work? What are the conventions? How is the original developer differing from those conventions?

And then you come to the realization that the wheel gets reinvented because the existing wheels were made with specific business logic in mind. While this can often be generalized for a particular industry or set of use cases, it often does not and can not cover all use cases. The wheel is reinvented.

(Not so fast with the XKCD comic about new standards. We’re not imposing standards on others, we just need our software to work for us.)

Sometimes the wheel doesn’t need to be reinvented. Some help desk software I wanted to use needed some slight tweaks. The entry points were obvious and the development environment was easy to setup. I made the tweaks and was good to go.


>There are thousands of Java accounting apps

There is already a solution that lets us not reinvent this particular wheel. It's called Excel.

I hate using it.


Ah, that is why cars still use chariot wheels.


And how would we cope with this in, say, 100 years, when the list of required functionalities is much larger, and a new language comes along?


Seems like the fact that it's built in C# isn't really the main point. The main point is that it runs on ASP.NET infrastructure.


Would your reaction have been different if, instead of Microsoft, this article was submitted by some random person entitled "Show HN: I built a reverse proxy in C#"?

By your logic, the world should be operating on a single OS, with a single web browser, paired with a single web server, because why re-implement something that already exists?


Pretty much every "I built a thing" post is met with the same response. MS isnt special.

I think pretty much every open source project that wants to be taken seriously needs to compare itself with its competitors.


From what I can tell they built a tool for themselves to use internally, and open-sourced it. I don't think they were looking for approval or to be taken seriously.

Similarly, Igor built nginx to solve a particular problem, and I think it was a few years before it went open-source.

I can't imagine most OSS developers have delusions of grandeur.


Given that pretty much all of Microsoft's recent open-source projects under the .NET umbrella, including ASP.NET Core, have been a resounding success in terms of community adoption, I don't think they're very shy about comparisons.

I expect some of the comments under this post to age very badly.


Nope, if it's an HN-hyped language it gets all the love of the day: Ruby, Clojure, Rust, ...


Rust rewrites get a bit more love by default because it's implicitly assumed it'll be fast and bug free compared to alternatives.


I doubt "Rewrote it in Ada/SPARK" would get the same love.



It's only in the last 5 years that .NET was open-sourced and made available officially for Linux systems. Up until then, most C# use was strictly on Windows, and OSS, especially for Windows server-side stuff (IIS, SQL Server, etc.), was (and still is) pretty rare. I've seen Windows server shops (Microsoft included) implement everything from Redis, memcached, queue systems, Prometheus, Lucene, and DNS servers to reverse proxies entirely from scratch out of necessity. Years and years of man-hours going into building and supporting something that would be a simple Debian package install on Linux.

I have no insight into it, but my gut tells me Windows-server-exclusive shops are probably dwindling and have long since realized they needed to start moving to Linux, especially now that .NET is officially supported there. I think the days of reinventing everything in C#/.NET/Windows are coming to an end fast.


C#/.NET/CLR are competing against Java/JRE/JVM.

So reinventing "everything" in a Windows Server environment is slowly coming to an end, with Windows server becoming a hypervisor for VMs/containers and a runtime for .NET/CLR.

The economics of getting all of the FOSS infrastructure tooling in a "Windows native" form are being beaten by just running it in containers/VMs on whatever hypervisor is available.

In the same way, Linux is becoming a hypervisor for containers and the various container runtimes.


Linux is being replaced by type 1 hypervisors, no need for Linux to run a container runtime.


And that is not a gut prediction but history. 15 years ago the default was Windows, .NET, and SQL Server; for the last 5 years it has been Linux in the cloud, .NET, and PostgreSQL.


[flagged]


https://devblogs.microsoft.com/dotnet/announcing-yarp-1-0-re...

https://aka.ms/aspnet/benchmarks (click on the "1 of 21" pagination at the bottom of the page and select "Proxies"). You can compare it to nginx, envoy, and haproxy.


So this thing is 2x faster than Envoy? Something is probably wrong with the benchmark.


mdasen has already pointed out that "2x" seems to overstate the difference by quite a bit. YARP also has a spikier max latency, which I'd expect for a GC'ed language.

I don't see a way to choose the benchmarking platform in the PowerBI dashboard, so I assume all these numbers were collected on Windows. In that case, it doesn't surprise me that YARP is faster: Envoy uses libevent for cross-platform I/O, but libevent is relatively slow on Windows. It exposes Windows-native I/O completion ports as a BSD-style socket API, and the implementation is both inherently somewhat slow and pushes buffering concerns into the Envoy code. Kestrel, YARP's underlying webserver, uses libuv instead. libuv is more actively developed (because of Node.js) and takes the opposite approach, exposing an IOCP-like API even on POSIX systems. Basically, YARP's I/O model is much closer to Windows' native model. This Envoy issue[0] is really informative if you're interested in the details.

More broadly, the .NET team that builds the Kestrel webserver and contributes to YARP and gRPC is full of performance heavyweights. I'd start by assuming that the benchmarks for their projects are thoughtfully designed. Everyone makes mistakes, but start by assuming that James Newton-King, David Fowler, and their peers are brilliant engineers leading a talented team - because they are.

I say all this as someone who doesn't particularly love Windows as a development environment, .NET as a language, or Microsoft as a company (quit after just 6 months). Credit where credit's due.

[0]: https://github.com/envoyproxy/envoy/issues/4952#issuecomment...

Edit: moogly points out that my info on the Kestrel implementation is out of date. Mea culpa!


> Kestrel, YARP's underlying webserver, uses libuv instead

Kestrel (as used in ASP.NET) has used a Socket-based transport since .NET Core 2.1. The libuv transport was marked as obsolete in .NET 5 and has been removed as of .NET 6, I believe.


Ah, good to know! I'm barely C#-literate, so the Socket docs[0] aren't totally intelligible to me. Somewhere under the hood, though, there must be a common abstraction sitting in front of the relevant Windows and Linux syscalls - did the libuv stuff just move down a few layers, or is this using something completely different?

Either way, I'd be shocked if this weren't designed top-to-bottom to have excellent performance on Windows.

[0]: https://docs.microsoft.com/en-us/dotnet/api/system.net.socke...


You can find the Socket partial class files here: https://github.com/dotnet/runtime/tree/main/src/libraries/Sy... Socket.cs, Socket.Unix.cs, Socket.Windows.cs, etc...

AFAIK, there are no parts of libuv left in .NET Core. That was mostly used to quickly get the framework up and running on Linux.


The benchmarks seem to show YARP doing around 35% more requests per second, having around 25% less mean latency, 20% less 90th-percentile latency, and 10% less 99th-percentile latency for http-http. Moving to https-https, it's around 10% more RPS, 10% less mean latency, 15% less 90th-percentile latency, while Envoy has 10% better 99th-percentile latency.

I'm not sure where you're seeing 2x, but it might be that the scale of one of the charts you're looking at is deceptive, or I'm not looking at the same test you are.


I don't think anyone is using Envoy for peak performance. It is slower than haproxy and nginx but it has other advantages:

* it is super extensible
* truly open-source and open to contributions by companies
* the codebase is really modern and easy to reason about


Ah, beat me to it :-)


I’m on mobile, and that page is very slow and very tiny. Anyone got a TL;DR?


50% slower than nginx and haproxy


Can you even interpret graphs?


https://devblogs.microsoft.com/dotnet/announcing-yarp-1-0-re...

More here: https://github.com/microsoft/reverse-proxy/issues/40

It was hard to read on my phone, so I don't know if it shows what you want to see.


Only available in the .NET ecosystem. Got excited after I read the title, but forgot Microsoft is Microsoft.

"YARP is a reverse proxy toolkit for building fast proxy servers in .NET using the infrastructure from ASP.NET and .NET."
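For what it's worth, "toolkit" here means you wire it into your own ASP.NET app rather than running a standalone binary. A minimal sketch (assumes the Yarp.ReverseProxy NuGet package and a "ReverseProxy" config section with routes/clusters, per the YARP docs):

```csharp
// Program.cs -- minimal YARP setup sketch
var builder = WebApplication.CreateBuilder(args);

// Load routes and clusters from the "ReverseProxy" section of appsettings.json
builder.Services.AddReverseProxy()
    .LoadFromConfig(builder.Configuration.GetSection("ReverseProxy"));

var app = builder.Build();

// All matching requests get proxied to the configured cluster destinations
app.MapReverseProxy();
app.Run();
```

The routes and destination addresses themselves live in configuration, which is a big part of why people end up embedding it in an existing service.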


I don't get it, what do you even expect them to do? Write it in Java? It's MIT-licensed, you can rewrite it if you wish.


Yeah - whatever language they chose it would be "only available for [that] ecosystem." At some point you've got to pick your home, not like you can write a 100% cross-language framework for anything.


Hopefully one day most devs will share one home called wasm runtime. Although not sure if it applies to this scenario (lower-level code).


WASI? There are already prototypes of running .NET in WASI that might make it into .NET 7.

And WASM is already supported; app size is about 1 MB including runtime and GC.


It’s written in C#, probably because they use a bunch of C# internally for this sort of project (who’da thunk it?). That naturally restricts the project to .NET environments.

That line could easily have read “for building fast proxy servers in the JVM” or “for building fast proxy servers in node.js” or whatever else, and it would’ve still been just as restrictive.


Still more flexible than the "app"-style proxies. And there is ongoing work to allow C exports, not just imports, in C#. That means you could make it a library for any language, though with your own custom interface, since you can't call the C# APIs unless you export a function to C.
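The C-export work mentioned here builds on `UnmanagedCallersOnly`. A hedged sketch of what such an export could look like (the function name and behavior are made up for illustration; actually producing a native library requires NativeAOT compilation):

```csharp
using System;
using System.Runtime.InteropServices;

public static class ProxyExports
{
    // Under NativeAOT, this is emitted as a C-callable symbol named "is_healthy".
    // Hypothetical example: callers pass a UTF-8 destination address and get 1/0 back.
    [UnmanagedCallersOnly(EntryPoint = "is_healthy")]
    public static int IsHealthy(IntPtr destinationUtf8)
    {
        string? destination = Marshal.PtrToStringUTF8(destinationUtf8);
        return string.IsNullOrEmpty(destination) ? 0 : 1;
    }
}
```

Note the signature restriction: only blittable parameter and return types, since the caller is plain C with no marshaling layer.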


Anything that requires a giant runtime to run is a big NO for me.

That's why languages like Go make more sense for these kinds of use cases.

Microsoft not wanting to support CoreRT back in the day was their biggest mistake ever; shame on the people at Microsoft who lobbied against it, shame!


They are working on Native AOT for .NET 7, which is based on CoreRT. See the issues tracked in the runtime repo.

I have been using it with reflection fully disabled in a console app, and it produces a 5 MB native binary.


ok that is VERY encouraging to hear!

thanks for letting me know about that

EDIT: found it: https://github.com/dotnet/runtime/issues/61231


I wouldn't say it's a giant runtime. A trimmed Hello World .NET app is ~11 MB these days - not quite as good as Go, but you can make .NET apps that do quite a lot in under 20 MB.

And sizes will be going quite a bit lower with NativeAOT this year.

https://www.awise.us/2021/06/05/smallest-dotnet.html
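For anyone curious how that trimming is done, it's just publish flags. A rough sketch (exact sizes vary by runtime version and target; `linux-x64` is an example RID):

```shell
# Framework-dependent publish: small output, but needs the .NET runtime on the host
dotnet publish -c Release

# Self-contained, trimmed, single-file publish for Linux x64 -- the mode that
# gets a Hello World down to roughly the sizes discussed above
dotnet publish -c Release -r linux-x64 --self-contained true \
    -p:PublishSingleFile=true -p:PublishTrimmed=true
```

Trimming can break reflection-heavy code, so it's opt-in and worth testing against your actual app.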


My first thought is "wasn't that called ISA?"

I can't help but be wary of anything Microsoft announces that's open source. Without engaging in MS-bashing, I always wonder what their motivation is. Then again, part of me wants to believe there's good, avid coders working there.


1997 called and wants its outdated MS opinion back.

MS is one of the largest open-source contributors these days. .NET is open source and cross-platform. They even have an open-source Linux distribution. They actively contribute to Rust, WebAssembly, the Linux kernel, and Kubernetes.


Yeah, right. This was two months ago: https://news.ycombinator.com/item?id=29579994




