You could still develop stuff for server OS while having the ability to play games without having to reboot or use wine inside linux.
Lots of companies spend a lot of effort to run code on multiple platforms (SQL Server recently announced Linux support; .NET Core has supported runtimes on Linux too, and tons of open-source languages have runtimes for multiple platforms). It would be great for both devs and end-users if the number of things that differ between platforms was reduced.
This sounds either a bit like CoLinux, or like the POSIX subsystem revived. Remember: Windows has kernel support for different userspace APIs, and the well-known Windows API is just that: A user-mode subsystem running atop the kernel (there have been OS/2 and POSIX subsystems before).
I'd love it, but not having a graphical interface limits the added value. Currently the main problem with running desktop Linux in a VM is the limited 3D/2D support, which increases CPU usage to the point of making your whole computer unusable.
On the server side Hyper-V in Windows 10 is a partial solution that already works.
Maybe things have improved since, but at least a few years ago, it was always a crapshoot to try to get some new open source tool set up on Windows/Visual Studio, vs. batting close to 1.000 on Mac with configure && make && make install.
Another way to put this is that the world Terminal.app gives you access to is a huge selling point for developers, and this is part of Microsoft's attempt to provide something as useful.
On a more serious note - Unity is something I deeply hate, and a lot will depend on the quality of the implementation.
It will also make it easier to develop for Linux.
For anybody else, just search this on Google: 2d 3d cpu ubuntu (virtualbox OR vmware OR "hyper-v")
> https://www.samba.org/samba/news/articles/low_point/tale_two...
> http://brianreiter.org/2010/08/24/the-sad-history-of-the-mic...
Edit: yes, that is in fact exactly what the first link you gave says: The POSIX subsystem was added as the POSIX standard had become very prevalent in procurement contracts. [...] This original subsystem was, I think it's fair to say, deliberately crippled to make it not useful for any real-world applications. Applications using it had no network access and no GUI access, [...] SFU contains a full POSIX environment, with a Software development kit allowing applications to be written that have access to networking and GUI API's.
http://www.colinux.org/?section=home
I guess it's possible to do something like this again.
If you can have all the comfort of Linux (a huge catalog of software that's easy to uninstall, network transparency, ...) with the assurance that your hardware will be fully supported by the OS, it would be worth a try.
I've noticed since about 2013 that I've been booting into the VM less and less often. The most recent time was after maybe 9 months without using Windows? I wanted to check how something related to batch files worked, purely out of curiosity (i.e., unrelated to professional work). There were so many updates queued up that I almost said "screw it" to the whole thing, reasoning that asking a friend to check would be easier/faster than the wait was worth.
I'm not trying to wrinkle anyone's shorts, but this just makes a lot of financial sense. Let the "community" do most of the OS development and only maintain the Windows UI. This allows them to focus more on services and Azure.
I personally think .NET is much worse than any of the more common web languages (even PHP or Perl) for the web. If I were writing a Windows application then I'd probably write it in .NET using Visual Studio, but not a web application.
As I said in my original comment "Different strokes.", you may like .NET. That's fine. It might be the right choice for you and the wrong one for me. I was more commenting that it was amazing to me that someone would think it was awesome because it sounds like the complete opposite to me.
I guess I should have asked what you find compelling about writing web applications in .NET.
As long as you consider OSX to be a Linux distro (lol) with an Apple UI, then sure.
But I doubt Microsoft ever gets any closer to unix-like systems than Apple is.
What does this give you that you would not already have with cygwin? The latter installs .exe versions of the usual command line utils, and I'm almost certain ZSH and the others you speak of are included.
I do not understand the practical implications of this move by Canonical/MS other than PR - what's actually changing from a user/dev standpoint?
I think the HN intolerance towards Microsoft / zealousness for Apple is showing here. Certainly .NET isn't for everyone, but I don't think "is not a good web development framework" is justified. Check out http://nancyfx.org/ if you're looking for something more lightweight than the full ASP.NET / IIS stack.
I have issues with Microsoft's MVC (mostly that there is no official way of splitting it across several solutions while keeping routing working), but I've never found it overkill for enterprise-style webapp development.
We used MVC/Entity Framework. It works well as a RAD for the backend with full HTML/CSS/JS for the front end that we can get creative with. Reminds me a lot of Java development.
But today kernel software is practically commoditized by Linux. Competing feature-wise is a fool's errand - it's just too costly and slow to go it alone.
FreeBSD could be another choice also. Lots of industry support.
The MVC model itself is not overkill, sorry that sentence was not clear. I should know better than make contentious comments on HN that are going to spawn a bunch of aggressive responses when I'm trying to start my day.
Wine, and for the troublesome apps: VM.
This is all not to say that Microsoft's tech in these low-level areas doesn't have advantages over Linux, or that it's bad, but it'd be nice to have it at a low level.
As it is, it looks more like a Linux environment on Windows. Analogous to a Canonical-supported version of Cygwin.
I'd love to see Windows as Linux distro because I'd prefer to give full access to my hardware to Linux and only pull out a Windows environment when an application requires it. Desktop Linux users are in the minority though, so I expect there's a lot more demand for the reverse.
My only real problem with Cygwin is that it lacks a command-line package manager. If they could adopt pacman for package management like MSYS2 does, I'd be a happy camper.
edit: To deploy Cygwin-based applications you need to get a commercial license from Red Hat (if it's not FOSS). Which could be a deal-breaker.
accounting? web based
service reports for moonshine work? Office 365 online or Open-/Libre-office
gaming? Steam has worked nicely on my not too beefy desktop for years (I only play CS:GO though)
Today I'm back on Windows 10, mostly, since Windows 10 is less annoying and my current employer doesn't care if I have a personal account on my new nice laptop.
[0]: Work for NotSoBigCo between 8-16
Pushing operating systems under the abstraction is just the next step after decoupling Windows from hardware. In a sense that's been a theme for Windows since the development of .NET.
The value of Windows has been as an ecosystem, and it almost certainly will remain one. Running Windows is a tradeoff, and it comes with big advantages for some users.
I was also running into Haskell compilation problems that were fixed by running Ubuntu in a Vagrant environment, but it was slow. There isn't good NFS support on Windows either (some exists, but it's limited).
I don't run Windows and consequently haven't used VS in any kind of intimate detail; I'm sure it's great if you like dealing with IDEs. I feel more productive with Vim, tmux, GHCi, and GraspJS for most of my web development.
You have not given any solid technical reason as to why ASP.NET is a bad framework. In my experiences, it's more or less as capable as Ruby on Rails, Clojure, Java, etc. You've stated it's overkill, meaning what exactly? Are you even aware of the changes being made to ASP.NET vNext? The dotnet cli tool? The only complaint you seem to have is that the tight coupling of ASP.NET to various Windows platforms is a little much for people who are used to Go or RoR.
But you don't have to use MVC; there's Nancy or low-level OWIN. So why do people complain about MVC when there are other choices? Certainly not as many as on other platforms, but at least a few good ones exist! Why judge the whole platform because of one framework?
It's similar with EF or NHibernate. They are big and heavy and very slow if not used properly, but there's also Dapper, Massive, or Simple.Data.
OSX is better because it doesn't feel too different from Linux (aside from setting docker machine ENV variables). Still virtualized so you take a performance hit.
for an example of how to debug using gdb in Visual Studio. Having an integrated Linux environment, would make this support seamless. Then Visual Studio becomes the cross-platform hub for building code for Windows, Linux, iOS, and Android.
In retrospect it's not entirely VS's fault, though I just found it amusing how quickly it ate through my storage when Vim only takes like 90 megs.
Naturally, this is one of their alternative methods.
There is babun (https://babun.github.io/). It is essentially a wrapper around cygwin and comes with a package manager.
I wonder what will happen to Powershell now.
Windows still has a ways to go. I think this might make some Windows stuff easier to deal with, but I still prefer jobs where I can run Linux natively on my workstation.
Did they have to contribute patches to bash, or just install it by default? I don't see anything on the bash mailing list, but the development is not particularly open.
Apple stopped updating Bash in OSX when the upstream license changed from GPL2 to GPL3, I believe. (Fortunately, they keep the bundled zsh more up to date)
This isn't news. This is Microsoft up to the same dirty tricks they pulled in the 90's to try to kill UNIX.
OSX is a thin layer of UNIX with a lot of non-UNIX like stuff. Aqua over X. Self-contained apps over package management (unless you want to count the app store).
I find it more like a broken borked *NIX system than anything.
http://www.hanselman.com/blog/DevelopersCanRunBashShellAndUs...
"This is a real native Bash Linux binary running on Windows itself. It's fast and lightweight and it's the real binaries. This is a genuine Ubuntu image on top of Windows with all the Linux tools I use like awk, sed, grep, vi, etc. It's fast and it's lightweight. The binaries are downloaded by you - using apt-get - just as on Linux, because it is Linux. You can apt-get and download other tools like Ruby, Redis, emacs, and on and on. This is brilliant for developers that use a diverse set of tools like me."
"This runs on 64-bit Windows and doesn't use virtual machines. Where does bash on Windows fit in to your life as a developer?
If you want to run Bash on Windows, you've historically had a few choices.
Cygwin - GNU command line utilities compiled for Win32 with great native Windows integration. But it's not Linux.
HyperV and Ubuntu - Run an entire Linux VM (dedicating x gigs of RAM, and x gigs of disk) and then remote into it (RDP, VNC, ssh).
Docker is also an option to run a Linux container, under a HyperV VM.
Running bash on Windows hits the sweet spot. It behaves like Linux because it executes real Linux binaries. Just hit the Windows Key and type bash."
I'm genuinely very tired of OS X, which (to my perception at least) has gotten steadily worse with every version. I for one will be happy to switch.
Some more info on this
I've been using it for a while now for a lot of non-git stuff too and I'm quite happy with it.
For example, this will probably help expose and fix lots of bugs in Microsoft's implementation of Linux interfaces, which will be a benefit to free software developers and vendors.
Also, general users will get more exposure to free software programs, and may be more open to buying a legit Ubuntu or other Linux computer in the future. For example, I was able to switch my wife over to using Linux Mint without any issue, which was undoubtedly made easier by the fact that she was already using LibreOffice, Thunderbird, and Firefox on her Windows PC.
It seems like people are able to pretty easily run free software programs on Mac OS X, and all things being equal I think that has been a great benefit to free software, and a lot of web developers et al seem to be willing to make their program free software friendly and release them under free software licenses. I would love to see a similar trend with Windows, even if I personally think that proprietary operating systems are extremely harmful and need to go the way of the horse and buggy.
Anyway, for me, this is great news. The fact that it was really hard to work with python/ruby/node/etc. under Windows, and the fact that I hate PowerShell, were the two main reasons why I work on a Linux OS all the time.
Will reconsider a windows laptop again next, if build quality and battery life are comparable.
Not much information for now, but this seems to be a little revolution.
Recently MS has been making all the right moves technically, but they've also doubled down on the spying, pushed W10 without consent, and are still in bed with the NSA, so they remain out of consideration for me.
A shame really. :(
Bash alone isn't that useful, you'd need other stuff and you can already get that from other sources. There's also a distinction between a terminal program and shell. Bash is a shell. iTerm2 is a terminal program. Cmd.exe is both?
% uname -a; bash --version; zsh --version
Darwin hostname 15.4.0 Darwin Kernel Version 15.4.0:
Fri Feb 26 22:08:05 PST 2016; root:xnu-3248.40.184~3/RELEASE_X86_64 x86_64
GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin15)
Copyright (C) 2007 Free Software Foundation, Inc.
zsh 5.0.8 (x86_64-apple-darwin15.0)
I think part of my objections are that OS X used to be absolutely rock solid, around the Snow Leopard era. An entire release dedicated just to tuning up the OS! Unheard of now - I will never install a new version until the x.1 patch is out, there are always huge bugs.
I don't expect Windows to be rock solid, I just don't expect it to be any worse than OS X any more.
I had always hoped this would happen when I was younger, and now it's finally here.
(EDIT: this would also require ELF loaders and all kinds of other good stuff, but still a possibility IMO)
Android's license is "you need to put the Google Play Store and the Google App ecosystem on the phone". Windows' license might still be, "pay us money".
You wouldn't pay for the kernel (because of GPL) but you would pay for branding and support (like RHEL) and you would pay for the ability to run the "Windows Application Compatibility Layer".
$> uname -a
Darwin K2523 15.4.0 Darwin Kernel Version 15.4.0: Fri Feb 26 22:08:05 PST 2016; root:xnu-3248.40.184~3/RELEASE_X86_64 x86_64
$> which bash
/bin/bash
$> bash --version
GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin15)
Copyright (C) 2007 Free Software Foundation, Inc.
The latter would be much more interesting to me, since what I really want out of "running linux on my desktop" is for it to actually act like the linux machines my code is targeting, and I'm dubious that a syscall layer will achieve that to the degree I want.
I got a Mac primarily because of its Linux side, and it actually is Linux.
This is still Windows, but with a Linux "side" to it? If I apt-get install redis, do I make it startup like I would in linux, or do I use windows services? In the screenshot there's a /mnt directory, is that behaving the same as it does in linux?
This is so confusing... but if it's legit, then I would actually look at switching back to windows.
ugh
Windows also offers a lot more customizations than OS X.
However, there are things I like about OS X also, like spaces and multi-touch trackpad support.
I use both on a daily basis.
No part of Mac OS comes from Linux at all. Also, most of the standard tools are from BSD, not GNU.
Maybe you meant "it's a UNIX-like system" but those predate Linux by 30 years or so, and at any rate Windows + Cygwin was already a UNIX-like system to a similar extent, so that's not really relevant to what was accomplished here.
Pretty excited about this, given that I've been using Cygwin for ages and always found it a pain to hit those edge cases where it didn't work.
Stable USB stack would be nice as well. Ever since El Capitan, virtual machines I run off USB drive have been getting random I/O timeouts.
OS X tends to need quite a bit more memory than Win10. Win10 is as usable on 2 GB RAM as OS X on 4 GB. OS X graphics driver is also pretty slow, some 30% slower than on Windows. OpenGL support is pretty bad on OS X.
On the Windows 10 side my biggest issues are unstable (or temporarily unavailable) RDP and bluetooth stereo audio stuttering. RDP color accuracy also leaves a lot to be desired.
NT system calls are not exposed to userspace; only system-supplied DLLs can use them. This is enforced by changing the syscall numbers with every build, so a non-system app can never know which syscall number to use.
There is a distinction, because you will notice much larger differences between Mac and [insert Linux distro here] than between Linux distros themselves.
However, I don't know if this is the case. I remember in the livestream Meyers saying that they will enable you to choose any shell you want "powershell, dos, bash, and more coming soon". If they just supported _any_ ubuntu binary natively, I don't know why he would've said "more coming soon"
On my i5-3550 with 16 GB of RAM and an SSD it takes a couple seconds to start the first time and less than a second for subsequent times.
Both machines are running Windows 10.
Right now, the machine with the spinning rust is loading a bunch of files with an I/O priority of "background" because it just got booted into Windows; that might slow it down a bit because of the seek times and I don't know if Windows is willing to starve background I/O for seconds at a time to speed up interactive requests (I doubt it).
Update: once all the background preloading is done, PowerShell restarts in three seconds on the spinning-rust machine.
Long story short, I think getting an SSD will be the thing that makes PowerShell start acceptably fast.
That explains why Wine always seemed so buggy, though.
This just shows how standards are made... implement first then think about it later :) I don't think the Linux syscall interface is the model of clarity, but that's what we have.
EDIT: This answers my original question... apparently they didn't patch bash -- they patched their own kernel to run the bash binary, and all Linux binaries! It was done at a binary level rather than source level.
They are trying to introduce bash as another native shell option. That's all.
Yes; full, standard, repo access [1].
> With full access to all of Ubuntu user space > Yes, that means apt, ssh, rsync, find, grep, awk, sed, sort, xargs, md5sum, gpg, curl, wget, apache, mysql, python, perl, ruby, php, gcc, tar, vim, emacs, diff, patch... > And most of the tens of thousands binary packages available in the Ubuntu archives!
[1] http://blog.dustinkirkland.com/2016/03/ubuntu-on-windows.htm...
This is a real problem for me as well. Not enough to make me want to ditch my Mac, but it's a real PITB.
What's next, RedHat on Server?
You're getting confused with MinGW, which uses MSYS to build native Windows executables. They need MSYS (as a Cygwin-derived emulation layer) because tools like GCC or Bash expect the system to support POSIX APIs and have POSIX semantics-- for example, Windows has no equivalent to a POSIX fork() call. The code you're compiling under MinGW has no MSYS or Cygwin dependencies, but the compiler and tools themselves (gcc, bash, the linker, etc.) do.
The interesting part is "Ubuntu will primarily run on a foundation of native Windows libraries."
If this is true Canonical is playing with fire. That could be the embrace step of the usual embrace-extend-extinguish script. Think what happens when Microsoft adds some new functionality to those "native Windows libraries" and Ubuntu/Windows is extended to use it while Ubuntu/Linux obviously isn't. If (when?) the majority of Ubuntu's users are on Windows, Microsoft only needs to start developing its own Ubuntu and cut Canonical out of the loop. If there is a significant number of Ubuntu/Windows servers by then, very little will be left for Canonical. Only the cost of Windows licenses can save Canonical on the server. The desktop will be lost, given that most of Ubuntu's desktops are born as Windows machines. The more convenient path will be to add Ubuntu/Windows to them and keep Windows for games, or just in case you need some native Windows application.
Maybe Canonical is thinking about leaving the desktop and focusing on the server. Still it's a risky move.
Another interesting post of October 2015 http://www.linuxjournal.com/content/ubuntu-conspiracy "The word is that Microsoft is in secret negotiations to purchase Canonical." Maybe they're playing the Elop move without having to change CEO.
And even though it is 3x3 MIMO, copying 20GB VM images is not something you want to do over wifi, so in the end I got the Thunderbolt Ethernet adapter. Works like a charm, shortened the transfer time by more than 10x.
SMB by itself never gave a problem (clean install of 11.0, then continuously updated to 11.4).
Not sure about X11 apps, but whatever. Largely this makes running a special win32 build of redis for whatever dev you're doing unnecessary.
I'm currently running Windows on this laptop, but I have a VirtualBox instance running Lubuntu for doing any UNIX-specific dev. Ports and files are shared across Windows and Linux transparently, which means there's far less need for running+maintaining a separate developer's VM.
Cygwin and things like colinux exist on opposite ends of that awkward divide, but something officially supported by the OS could maybe straddle it better.
> Microsoft research technology to basically perform real time translation of Linux syscalls into Windows OS syscalls
From what I can gather Microsoft is paying Canonical to help with a few user-mode bits and the Windows apt-get stuff uses the official Canonical sources.
As I said to him: "NT is the only major OS I know of that has always had personality subsystems. Cutler’s vision finally pays off after 3 decades of waiting"
Applications: Now you can install and configure applications like apache, postgresql, etc. on Windows the same way you do on Linux/other Unix platforms.
Strategically, this is a big win for Microsoft. Now they can go to their clients that are moving or thinking about moving to Linux and tell them "There's no need to migrate, just install your apps in Windows."
Sorry it's not recursive.
The linux module provides limited Linux ABI (application binary interface) compatibility for userland applications. The module provides the following significant facilities:

o An image activator for correctly branded elf(5) executable images
o Special signal handling for activated images
o Linux to native system call translation

It is important to note that the Linux ABI support is not provided through an emulator. Rather, a true (albeit limited) ABI implementation is provided.

https://www.freebsd.org/cgi/man.cgi?query=linux&apropos=0&se...

"Mapping syscalls from one OS to another" was really just an example to give the OP an idea of how this sort of thing works without a VM.
Edit: Nevermind then
I worked at a place that developed a "Linux on Windows" thingy back in the Windows XP days. It was essentially like WINE. A user-mode Windows program would load the Linux binary into the Windows program's address space and execute it, trapping any attempts by the Linux code to issue system calls, and the Windows program would then service those system calls.
For non-GUI stuff this worked remarkably well. I was able to grab the binary for rpm off of my Red Hat system, and the then current Red Hat distribution disc, and install successfully almost all of the RPMs from the disc and have almost all of the non-GUI ones work.
I had expected big problems from the case-insensitive vs. case-sensitive filesystem issue, but in practice there were only a handful of things that ran into this. Mostly Perl stuff that used both "makefile" and "Makefile".
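A quick way to see the pitfall described above (my own illustration, not from the original comment): on a case-sensitive filesystem the two names below are distinct files, while case-insensitive semantics like classic Windows would collapse them into one.

```shell
# Probe for case-sensitivity: "makefile" and "Makefile" are distinct on a
# case-sensitive filesystem (Linux), but the second write would clobber
# the first on a case-insensitive one (default Windows/NTFS behaviour).
dir=$(mktemp -d)
echo a > "$dir/makefile"
echo b > "$dir/Makefile"
count=$(ls "$dir" | wc -l | tr -d ' ')
echo "distinct files: $count"   # 2 on Linux, 1 on a case-insensitive filesystem
rm -rf "$dir"
```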
GUI stuff was another matter. We could run XFree86 under Cygwin, and then the Linux apps under our WINE-like program would work. However management was not keen on the idea of including Cygwin and XFree86 if we turned this thing into a project. Also, we wanted an X server that would fit in better with a mix of X and native Windows apps running at the same time.
I spent a while trying to write a Windows X server straight from the official specifications. I got as far as being able to get xcalc to display a window and all the controls to show up right, but weird things happened with events. Everything looked fine when I packet sniffed the communication. I still had not figured this out by the time management decided that this whole thing did not have enough of a commercial market to continue the project.
If there is one thing Microsoft has always delivered, it is strong developer tools. And this is another case, as developers now don't have to fiddle with a VM to get an Ubuntu test environment going before stuffing things into a container.
Every embrace of Linux that Microsoft has made of late can be traced back to their Azure cloud service.
Running the Linux kernel on Windows has been done before IIRC - coLinux?
"Of course, I have no idea how to CLOSE emacs, so I'll close the window. ;)"
Seriously: it shows an appreciation of where a lot of the workload for computer programmers is these days.
I suspect RAM issues. OS X isn't great if - say - Chrome eats all the memory. And if the RAM itself isn't rock solid, you will get crashes.
A lot of issues went away when I installed 32GB.
And Ubuntu's biggest "market" these days is not desktop, but as a container base.
So this is MS getting cozy with Canonical to offer a development environment for Ubuntu based containers destined for Azure.
This has been done before with other x86 OSes: FreeBSD has had 32-bit ABI compatibility for at least a decade (https://www.freebsd.org/doc/handbook/linuxemu.html), and the "lx branded zone" for Solaris also has 64-bit support (https://docs.oracle.com/cd/E19455-01/817-1592/gchhy/index.ht...).
It looks like Ubuntu was the first to package some Linux binaries for Windows. I guess that's useful?
Can't get node.js to run on your "Linux environment" and access a database running on windows? Good luck finding an answer for that on stackoverflow.
You'll have to target yet another environment for any app you develop. Will it be running on a Windows server? A Linux server? A server running "Windows with bash"?
This version is taking the native Ubuntu binaries and executing them directly against the Windows API via a real-time translation layer.
The difference is like playing a game using virtualization technology vs. WINE. As the complexity of the game increases, the former begins to slow down and break.
Transfer speed after overhead over 11ac 867 Mbps wifi is usually 400+ Mbps.
No packet loss (or at least it's below 0.1%).
I have tried to figure out why I want a non-mac for my laptop and concluded I just like change... :) I was almost settled on that dell xps with ubuntu, but if the Surface Books get thunderbolt 3 and this before the autumn I am pretty sure I can't resist anymore...
EDIT: typo
You might hate the terminal where PowerShell runs, but I don't think you hate PowerShell.
I see a future where devs move to Windows due to Bash and stay due to PowerShell.
Left click in fvwm, select xterm, window appears in less than my blink response time.
Seriously: I think I might pop Win10 on an old Dell i5 that came with Win7 and play with this.
Users with zsh as their default shell are a tiny minority.
Now, have you tried these experimental approaches with Unity on VMware/VirtualBox/Hyper-V? Please let us know your results so anyone can benefit of that.
It's such a strange thought to me, the idea of being a sysadmin and clicking around in a Windows computer... is that how it happens, or do most Windows admins use one of the CLI tools mentioned in the article?
In fact, binaries compiled with MinGW link against MSVCRT (the Microsoft Visual C Run-Time DLL) by default. So there's no compatibility layer, and they don't rely on Cygwin.
Not the person you're replying to, but interesting ...
>tools like GCC or Bash expect the system to support POSIX APIs and have POSIX semantics-- for example, Windows has no equivalent to a POSIX fork() call.
So do Cygwin and/or MSYS emulate the fork() call on Windows? and if so, do you have any idea how that is done? Just interested, since I have a Unix background - not at deep OS level, but at app level and also at the level of the interface between apps and the OS (using system calls, etc.).
If you plan on using the Linux environment and having it interact with the Windows environment you're going to have the same limitations that you would with a VM, OR you'll have to change your workflow because the way a program running under a Linux environment interacts with some windows service is going to be a completely new thing.
Will I be able to use a windows only service to interact with a command line program written in python running in the Linux layer? If I can't interact with the windows layer completely then it's very much like a VM or a container running inside a jail.
What happens when I install python or nodejs and stuff just doesn't work right? Like, say I have a database running on Windows and I want to interact with it from python. Will I have to rely on Windows making sure the compatibility layer always works?
Cygwin does some pretty horrific hacks to emulate it. It basically creates a paused child running the same binary, fills in its memory, stores the register context of where it came from in a shared memory, and then resumes the child. The child on startup detects that it was forked, and then looks into shared memory to resume running at the place of the fork.
edit: It's even worse than I remembered: https://www.cygwin.com/faq.html#faq.api.fork
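To see why fork() matters so much for a shell in the first place, here's a tiny illustration (mine, not Cygwin's): every ( ... ) subshell is created with fork(), so the child starts as a copy of the parent and its changes never flow back. This is exactly the POSIX behaviour that the FAQ entry describes having to rebuild by hand on Windows.

```shell
# Each ( ... ) subshell is a fork(): the child is a copy of the parent,
# and changes it makes stay confined to the child.
x=1
( x=2; echo "inside subshell: x=$x" )
echo "after subshell:  x=$x"   # still 1: the child's assignment didn't propagate
```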
It's always been possible to run Linux programs under Windows: just run a VM. What Microsoft has done here makes it less painful to run Linux programs, sure, but these programs still exist in their own little world. Cygwin programs, on the other hand, are Windows programs. To me, that makes them much more useful.
Now, maybe I'm wrong. Maybe the new Linux subsystem is more tightly integrated with the rest of the system than I'm guessing. But based on the available documentation, it looks a lot more like SFU or Interix than it does Cygwin, and that's a shame, because if I'm right, Microsoft misunderstood the whole point of Cygwin. Again.
As to why people are considered "hard core" when they use the cli, it must have to do with the (somewhat false) notion that one must have a special kind of mind to be able to remember all these commands. Most cli users know that they're not geniuses, they just had the patience to go through a tutorial or read parts of the manual. Then they repeatedly used a small set of commands that they need almost daily and that stuck to long term (or muscle) memory. Over time they took some notes when they encountered handy but seldom used commands. They've done this for many many years with many many tools. Look over their shoulders as their little fingers do their thing and mistake their craft for wizardry.
However, "Linux" is almost always a reference to GNU tools and the Linux Kernel. It may not be semantically accurate, but take that up with the same people that made literally mean both itself and its opposite.
Microsoft has already gone out of its way to take control of the hardware and kernel (think Secure Boot on Intel and the _total_ control on ARM). They're now allowing you the privilege of running some POSIX userland applications (which have no real power) so people don't complain too much when they make it impossible to boot a custom kernel on newer hardware.
"What do you mean you can't boot linux? Don't be silly, you're already running ubuntu!"
Jeffrey Snover [MSFT]
Yes. That's one thing we spent considerable engineering effort on in this first version of the Windows Subsystem for Linux: We implement fork in the Windows kernel, along with the other POSIX and Linux syscalls.
This allows us to build a very efficient fork() and expose it to the GNU/Ubuntu user-mode apps via the fork() syscall.
We'll be publishing more details on this very soon.
In case you've not noticed, this is a very, VERY different Microsoft, one I re-joined recently precisely to work on this very feature! :D
First, PS Remoting is terrible compared to SSH. Commands fail randomly on a small percentage of systems, and the only options to troubleshoot are to log in manually to the remote system and try a number of things, including rebooting the remote system (not very good for servers).
Second, debugging is terrible compared to bash - sure PowerShell ISE allows you to step through your code line by line, however, I don't have anything like "bash -x script.sh" which lets me see the actual execution and return code of every line of my scripts.
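For anyone who hasn't used that flag: here's roughly what the tracing looks like. The script is made up for illustration; "bash -x" (or "set -x" inside the script) prints every command to stderr, with variables already expanded, right before it runs:

```shell
#!/bin/bash
# Hypothetical one-liner script; invoke as `bash -x trace-demo.sh`, or
# toggle tracing inline as below. The trace goes to stderr.
set -x
host="web01"
fields=$(echo "$host is up" | wc -w | tr -d ' ')   # trace shows the pipeline
set +x
echo "fields: $fields"   # prints: fields: 3
```

Each traced line appears prefixed with "+", so you can see exactly which command ran and with what arguments, something PowerShell's Set-PSDebug only approximates.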
Third, bash has a much simpler way to chain output through multiple programs using pipes and treating input/output as simple text. PowerShell is a pseudo programming language with objects and other data types that just don't enable this type of chaining in the same simple and easily understandable way.
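The kind of chaining I mean, with made-up sample data: everything is a byte stream, and each small tool neither knows nor cares what produced its input.

```shell
#!/bin/sh
# Find the most common line in some input by composing four small tools.
printf 'alpha\nbeta\nalpha\ngamma\n' |
  sort |      # group duplicate lines together
  uniq -c |   # prefix each distinct line with its count
  sort -rn |  # highest count first
  head -n 1   # leaves "2 alpha" (whitespace padding varies by platform)
```

Any of those stages can be swapped for awk, grep, or a 10-line script of your own, with no shared object model required on either side of the pipe.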
It took me weeks to write a PowerShell script that used PS Remoting to loop through a list of provided servers, install a service, set the RunAs user, start the service, create a secret file, and EFS encrypt that file as a specific user. I could have written the same script in hours using bash for Linux boxes. It would have been much more efficient by using tools like GNU parallel.
I'm not sure how anyone who's used both PowerShell and bash for any serious work could say PowerShell is better, unless they're a .NET developer and appreciate being able to blend .NET objects into their scripts, but to me, that just breaks the simple modular composability of the *nix philosophy.
The Windows POSIX subsystem which shipped in NT 3.51 was a minimal implementation of the POSIX syscall API plus a userland toolset. It was replaced by Interix, later renamed Services for UNIX (SFU), which had a more comprehensive kernel implementation and a more up-to-date userland. However, that tech was not resurrected to build the Windows Subsystem for Linux (WSL).
Importantly, WSL doesn't ship with a distro - we download a genuine Ubuntu userland image at install-time and then run binaries within it.
Let's try the opposite. Say someone got Wine working to the point where it was very nearly, perfectly indistinguishable from Windows and they put up a blog post saying "Everything works just as it should under Windows, because it is Windows." Microsoft's lawyers would come around with a C&D, and calling them pedants wouldn't invalidate their case.
He could have said "it's just like Linux, right down to the kernel interface" or "Everything works just like Ubuntu, because the userland is Ubuntu". Succinct and correct. Precision matters.
Windows Subsystem for Linux (WSL) which underpins Ubuntu on Windows is new Windows kernel infrastructure that exposes a LINUX-compatible syscall API layer to userland and a loader that binds the two.
This means you can run real, native, unmodified Linux command-line tools directly on Windows.
You don't have to be a fan of Microsoft but they deserve at least some credit.
Debian GNU/kFreeBSD is Debian, but it isn't Linux.
Mac OS X with GNU tools via MacPorts/Fink/Homebrew is OS X with a GNU userland, but it isn't Linux.
Windows 10 with an Ubuntu userland is Ubuntu, but it isn't Linux.
Linux is a kernel.
Knowing this, what should we call it?
Windows Subsystem for Running POSIX + Linux Syscall API Compatible Userland Tools? WSRPLSACUMT? :)
I'm genuinely interested on what you all feel would be a good way to think about naming moving forward.
EDIT: idiom
It is not built to support GUI desktops/apps. It is not built to run production Linux server workloads. It is not suitable for running micro-services or containerized environments.
Again - this is A COMMAND-LINE-ONLY DEVELOPER TOOLSET!
Can you elaborate?
I'm not aware of any physical Android devices which are able to boot and function as an Android device (e.g. with a hardware accelerated GUI, can make phone calls over GSM/CDMA) that don't require proprietary vendor blobs.
I am excluding the Android emulator because I don't think it qualifies as a "device"
[1] http://www.slideshare.net/bcantrill/illumos-lx
[2] http://us-east.manta.joyent.com/patrick.mooney/public/talks/...
The beta announced a few days back by Docker uses Hyper-V to boot a Linux kernel to run Linux Docker.
The preview announced around a year ago by Microsoft and Docker is a native Windows implementation of Docker, running on the next Windows OS.
[later]
But...
This new layer should let you run most Linux containers straight on top of the next Windows.
Interesting...
More questions: Will it be backported to Windows 8.1? How does it differ from CoLinux and andLinux?
This is pedantry, and certainly there's a sliding scale of openness for devices. But unless I'm very mistaken, there are no devices available that even approach 'purely FOSS'. What would such an 'Android' phone even be? No google play services, no google play store, crippled and buggy open GPU drivers, and still a proprietary baseband. Not that I'm happy with this situation it just seems impractical.
In what way is a "subsystem" different than a "library" or a "process" or a "driver" (if it runs in kernel space)?
Any process can use the native API. What's special about a "subsystem"?
I wonder how they will make Ubuntu happen on Windows. Reading some of the comments, some speculate a subsystem, while others suggest an interoperable interface.
Edit: reading bitcrazed's comments it looks like it will be implemented a la WINE. No need to recompile binaries made for Linux x86; you'll be able to run apt packages from Ubuntu out of the box.
> A team of sharp developers at Microsoft has been hard at work adapting some Microsoft research technology to basically perform real time translation of Linux syscalls into Windows OS syscalls. Linux geeks can think of it sort of the inverse of "wine" -- Ubuntu binaries running natively in Windows.
http://blog.dustinkirkland.com/2016/03/ubuntu-on-windows.htm...
https://en.wikipedia.org/wiki/UWIN
I've used it some on Windows earlier, and it worked pretty well. It might still be available if anyone wants to try it. The only small issue I had was that the process to download it from the (AT&T) web site was slightly involved, for no good reason as far as I could see. But not difficult.
This is the ISE in a default configuration: https://imgur.com/xz9Kfpt. On the left, just an open terminal; in the middle, a script which can be edited and executed at any time with F5; and on the right, all the PowerShell commands, which can be either immediately executed or inserted into your script with ease.
Unless you need tab browsing that much, which you can get via addons, the ISE is one of the best "terminals" out there imho.
It is still somehow a secret that PowerShell even exists. I can't count the number of Windows users I know that still open cmd.exe, or install cygwin so they can grep files.
I'm just saying that dealing with text is inferior to .NET objects on Windows, and the PowerShell pipeline is much more powerful. However, I don't know how many people will end up learning this because, "hey, I can just use bash!"
I know that Dave Cutler was heavily involved in designing NT, having earlier done the same with DEC's VMS. (I had read the book Inside Windows NT.)
But don't know what vision you refer to. Was it about the personality subsystems?
So can we use WSL by itself and pick a different distro, if we'd rather use say Alpine or openSUSE or Arch's userland?
BSD was UNIX, yet neither of its two prevalent derivatives (FreeBSD and OpenBSD) has applied for certification. They are classified as Unix-like; the same is true for any Linux distribution.
To some people, this may be semantics, but one of the reasons that drew me to OSX was the certification.
As people have mentioned, the biggest factor here is probably your hard drive, since you are loading maybe a couple hundred small files when you load the ISE.
As others have mentioned throughout this thread, PowerShell isn't going anywhere. We're investing considerably in the PowerShell ecosystem. PowerShell/WMF 5.0 just came out with a ton of new features[1], and we're not slowing down any time soon.
Because it's operating mostly in user mode today, Bash on Windows is much more suited to developer scenarios. I've already played with workflows where I'm running vim inside of Bash on Windows to edit PowerShell scripts that I'm executing in a separate PowerShell prompt. In fact, I can plug along fine in a PowerShell window, run a quick "bash -c 'vim /mnt/c/foo.ps1'", make a few edits, and be right back inside my existing PS prompt. This really is just another (really freaking awesome) tool in your toolbox.
[1] http://msdn.microsoft.com/en-us/powershell/wmf/releaseNotes
Except that isn't the opposite, it's an entirely different situation.
1) It isn't Windows. It's a complete rewrite of the Windows APIs. That is not the same thing that's happening here.
2) A C&D isn't a case, it's a piece of paper (politely) asking you to do something. Calling something Windows and it actually being an infringement on Windows patents are entirely different issues.
> Precision matters.
In carefully crafted theoretical situations feigning as analogies to this situation? Sure. In the real world? Hardly.
As long as Microsoft continues to disrespect the rights of users in regard to privacy, data-collection, data-sharing with unnamed sources, tracking, uncontrollable OS operations (updates, etc) - I will never go near it.
I expect some flak for my position... don't care. I find it especially offensive that ex-open-source and ex-Linux users (working for Microsoft) have the audacity to come on here and try to sell this as a 'Linux on Windows' system when most of what makes Linux special (respect for the user) has been stripped away.
It's like giving a man who is dying of thirst sea water.
Most comments here appear to be positive and that's fine... whatever. Please don't sell your souls and the future of software technology for ease of use and abusive business practices. /rant
I'd rather have a home I have control over, or even trust in.
"Problem for software unenlightened by ABI (golang)"
I know they're not your slides, but do you also think Go is unenlightened (which reads: clueless and unaware) or do you think perhaps it consciously rejected the common ABIs?
Things like Node.js already run pretty well on Windows as it is, and MS is building native tooling in Node.js (e.g. their Azure CLI).
With this change, Microsoft is definitely going to encourage a lot of Surface adoption for geeks.
It would seem like this compatibility layer goes some of the way to running Android apps on WP10.
But it might be handy to have a GNU/Linux distro in your pocket, coupled with Continuum, running inside a chroot.
(sure there's debian chroots on Android. There's Ubuntu Touch but I've never seen a retail handset, whereas Lumias do exist.)
Also, large parts of the kernel are from BSD.
because no one is forcing linux users to do anything. can you really not see the difference between giving an option to developers and "dragging linux users"
I feel very comfortable getting a stream of data and treating it with sed, awk, grep and whatever; but once I worked with objects it feels much more data-oriented.
And please don't get me wrong, I've been in love with Bash since a Slackware CD fell into my hands in the 90s. I just was mind-blown by PS after making fun of it for years - just because the default terminal where it runs is less than great.
I guess this thread proves the point of bringing Bash to Windows: Different people, different uses, different needs and solution. And that makes me happy :)
So rather than repeating "Linux on windows", this is "Ubuntu's userland on windows", or "GNU on windows", or any other variation, but NOT "Linux".
As far as being an "influencer"; do you see any links on my profile? Again, that's something other people find appealing, not me.
The demos are VERY convincing. Basically everything works exactly like you would want it to work. It's exactly Ubuntu and Windows running through the same kernel at the same time.
(preferably for win7, because I run Linux at home and win7 is the most likely platform I may spend some time elsewhere)
I do appreciate, from all I've heard about PowerShell, it might be an interesting environment to try some scripting in. The actual shell/programming language can't be (much) worse than bash--I mean let's admit, bash is pretty ancient and therefore didn't have the advantages of progress in designing programming languages we have made in the past decade(s).
Also, how is that experimental:
https://www.youtube.com/watch?v=37D2bRsthfI
Stop spreading FUD if you don't know what you're talking about.
That's why the child is allowed to do almost nothing:
the behavior is undefined if the process created by vfork() either
modifies any data other than a variable of type pid_t used to store
the return value from vfork(), or returns from the function in which
vfork() was called, or calls any other function before successfully
calling _exit(2) or one of the exec(3) family of functions.
Of course, if you're too used to Bash and its paradigm there's a learning curve in PowerShell. Not steep, though :-)
The killer features for me are the objects and being able to seamlessly use C# libraries in my scripts.
Another thing that no-one has mentioned at all is how this pairs up with UbuntuBSD.
* https://news.ycombinator.com/item?id=11326457
Michael Hall of Canonical is quoted elsewhere (http://www.cio.com/article/3046588/open-source-tools/ubuntub...) saying that
> I think it's a cool project and I'm looking forward to seeing how far they get with it. It would certainly be an interesting addition to our already varied list of official flavors, if they can get there.
If one has Ubuntu binaries, one can now run them directly on top of 3 operating system kernels:
* On the FreeBSD kernel, with UbuntuBSD.
* On the Linux kernel, with Ubuntu Linux.
* On the Windows NT kernel, with this new Windows NT Linux subsystem.
So whilst RedHat is busy pushing systemd, and the systemd people are busy pushing a convergence of all Linux distributions into systemd operating systems that do a whole lot of things in the same single way, Ubuntu is apparently taking on Debian's "universal operating system" mantle and extending it to places where even Debian is not.
Everyone is focussing on Microsoft. It's important to remember the "and Canonical".
The actual evidence from history of changing shells, from the Ubuntu and Debian worlds where they actually did make a change of shells (from Bourne Again to Debian Almquist) a few years ago, is that it doesn't drive people away in the first place, let alone away to Windows.
Even if one did a survey to make the latter not unsupported guesswork, one would have (if my experience of StackExchange is anything to go by) to account for all of those who answered that "My shell is Terminal.app." or "I have oh-my-zsh as my terminal.".
And their history is one of appearing to embrace something, and then introducing less and less subtle differences once they have the majority share. Also known as Embrace, Extend, Extinguish.
So it may well be that Canonical have a short term win here, but that in the longer term MS will sideline Canonical as the major share of developers have adopted "MS Ubuntu".
Quite!
But only because the Interix-derived POSIX subsystem was, and is, little-known and vastly underappreciated. Had it been better known, the payoff might have come a decade or more sooner. There are a fair number of questions being asked now, about the new Linux subsystem, where the answer is "No; but the old POSIX subsystem had that.".
* Does it support pseudo-terminals? No, according to the demonstration video; but the old POSIX subsystem did. (https://news.ycombinator.com/item?id=11415843)
* Does it let you kill Win32 processes? No; but the old POSIX subsystem did. (https://news.ycombinator.com/item?id=11415872)
* Does it support managing daemons? No; but the old POSIX subsystem did. (https://news.ycombinator.com/item?id=11416376)
* Does it support GUI programs? No (say the people behind it themselves, although I suspect that it could run X clients); but the old POSIX subsystem did. (https://news.ycombinator.com/item?id=11391961) (https://technet.microsoft.com/en-gb/library/bb463223.aspx)
I, for one, would like to see it resurrected. It's exceedingly useful, and is one major reason that I am not, nor will be, using Windows 10. This new subsystem does not have the things that I use SFU/SFUA for. Nor does it have the BSD-style toolset of SFU/SFUA.
This is part of the long-standing problem for people: this loopy re-presentation of what happens that completely ignores the past and even the present. A lot of us have been using bash and other shells, and indeed vim and other things, on Windows for years. They aren't "coming to Windows". They've already been there for a long time.
We've been able to invoke "vim foo.ps1" to edit our files, and do so without any necessity for an intermediary (and entirely supernumerary) "bash -c" too. I did so myself, only yesterday. This is not the news.
A new "Linux" subsystem is coming to Windows NT that allows one to spawn and to run unaltered Linux binaries directly. Explaining this as "bash is coming to Windows" is to give a hugely dumbed-down explanation, one that is so markedly wrong that it (mis-)leads to the very same mistaken assumptions about the imminent death of PowerShell and so forth that you are now having to counter in several places. (I know. It's not your own explanation. Nonetheless, one should not adopt the error from someone else, especially if one then has to firefight the world leaping to the wrong conclusions based upon it. That's just making a rod for one's own back.)
IT departments not worrying much about what you do as the superuser inside a virtual machine that is running only with your user credentials, is one thing. But tell them that you're now going to be installing and running random Ubuntu softwares, not in a virtual machine but natively within Windows, and they will prick up their ears and start to take notice. Even the ones who are alright about what's being installed will want to think about things like control over what packages can be installed and locally-hosted repositories. "So, tell me how I set group policy for your apt-get installer?"
And if that is not a worry, let me relate some personal experience of using the Windows NT POSIX subsystem. Anti-virus programs, particularly the ones with the whizz-o features of "let's check what 'the crowd' said about this program" or "let's run this program for a little bit in my controlled execution environment to see whether it does malware-type things", don't like this a lot. I had to go through the unblocking of "/bin/foo is a rare program" so often, for everything from "ls" to "ftp", that it was in danger of becoming an automatic reflex.
Goodness knows what the likes of DeepGuard will make of programs that use a wholly new set of system call entrypoints into the kernel. (-:
You do realize that that's the exact opposite of what is happening here, ne? The Windows NT file system is used, and the Linux kernel is being swapped out for the Windows NT one.
"At a low level" NTFS actually is "Unix-y", of course. It had to be in order to support the POSIX subsystems. Case sensitivity, hard links, symbolic links, and a wide degree of freedom for filename characters are all there, at a low level.
What makes you think "maybe"? This is (reportedly) exactly what's happening. What did you infer was happening from the headlined article?
Furthermore, only some of the (free) software bundled up as Ubuntu is "GNU". Being copylefted doesn't by itself make something part of the GNU Project.
* https://www.gnu.org/software/software.html#allgnupkgs
The annoying thing was the Microsoft video where twice Rich Turner of Microsoft stopped Russ Alexander to clarify what was happening and gave incorrect clarifications of how things were "running on Linux ... on Windows". They patently are not "running on Linux".
Microsoft has been quite happy to say over all these years that OS/2 1.x programs were "running on Windows" and DOS programs were "running on Windows" and Win16 programs were "running on Windows" and even Win32 programs were "running on Windows", M. Turner. This is just plain "Linux programs running on Windows". There is no need to make it confusing when it actually isn't. (-:
Gradually all of the subsystems, and processor architectures, fell by the wayside. The excitement for some is less that this is some fundamental architectural change in Windows NT. It isn't. It's that this is the first new thing in (desktop/laptop/server) Windows NT for a while that isn't "The customer can have any subsystem and processor architecture that he wants, as long as it is WinNN and Intel/AMD.".
It would be good to see the Interix subsystem come back, too. And maybe a second processor architecture, as well. (-:
I think that you're predicating an entire argument on what was probably a bit of slipshod writing. After all, we know (now) that what this is is Linux binaries running on the Windows NT kernel, and that Microsoft hasn't actually done anything to those binaries at all (and indeed touted that as a feature). So Microsoft hasn't taken any steps to make Ubuntu softwares specific to the new Windows NT Linux subsystem.
Nor has this even been positioned as an "Ubuntu on Windows server". Indeed, the Microsoft people promoting it have been stating (in all-capitals or boldface, no less) that it's for enhancing developer command-line workflows. It apparently doesn't even have the server capabilities (i.e. running programs as services) that even the old (Interix) Windows NT POSIX subsystem had.
You're also discounting the other Ubuntu news of the month.
See https://news.ycombinator.com/item?id=11415985 and https://news.ycombinator.com/item?id=11416376 .
I did actually give this some thought, for what it's worth. There are problems with "GNU" in the name and problems with "Ubuntu" in the name.
But it seems to me that Microsoft has a naming scheme that it is perhaps unaware of. See https://news.ycombinator.com/item?id=11417059 . (-:
Windows NT POSIX subsystem programs understand NT permissions, too. There's quite a lot about it in the Interix doco, explaining how ACEs are mapped and so forth. This is not a problem with the subsystem approach, demonstrably.
Of course, whether gdb would work is something that we haven't yet been shown.
Especially when they weren't doing so hot and the idea gets floated around (obviously without much serious thought, just something we say) "oh man, what if they just gave up on that part and used linux to make it all work" -- Then everyone goes "that'd be cool, but then all those apps wouldn't work..."
Hence they chose this route, which required some pretty fancy research on their part to implement. Which is even cooler imho. Probably the best scenario for Windows/MS overall in the end.
It is widely claimed and believed that when he moved with his team to MS that he reimplemented Mica as the NT kernel. http://www.textfiles.com/bitsavers/pdf/dec/prism/mica/
Seeing her send SIGSTOP to a running MSWORD.EXE process and observe it stop updating its window in response to expose events was splendid. :-)