I find it amazing that you can have such a functional Ubuntu environment by translating system calls. Microsoft does have the advantage of Linux being open source, I suppose, while the Wine project had to reverse-engineer DLLs, or have you supply them yourself.
> Can open the Windows Start menu
> And type "bash" [enter]
> Which opens a cmd.exe console
Right... Bash is a shell, but your interaction with it is controlled by a terminal program. Unless there are some real changes to cmd.exe t̶h̶e̶n̶ ̶i̶t̶'̶s̶ ̶n̶o̶t̶ ̶t̶h̶a̶t̶ ̶i̶m̶p̶r̶e̶s̶s̶i̶v̶e̶. You can compile a native bash and other utils yourself now; it's not that hard.
EDIT: It's more like a Linuxulator from BSD, which is certainly cool.
2. Extend can be good if done openly and collaboratively, as opposed to closed and hidden.
3. Extinguish - Here be dragons.
I'm hopeful the New Kinder Gentler Microsoft stays on steps 1. and 2. (done collaboratively).
It's ironic that (old)Microsoft exerted so much effort to put the personalities in place (OS/2, POSIX, etc.), then (mid)Microsoft systematically destroyed that work under Ballmer, and now (new)Microsoft is reimplementing the same thing under a (seemingly) completely different system.
It would be more correct to say that Microsoft has the advantage of the user-space libraries used in GNU/Linux distributions being open source. The Linux kernel itself being GPLv2 is probably a problem for Microsoft's developers, because of the possibility of being accidentally exposed to it while researching documentation.
I didn't see any information about whether they do it using direct system call translation, or using shared library interpositioning. In either case, it's pretty cool!
This has been done before with other x86 OSes: FreeBSD has had 32-bit Linux ABI compatibility for at least a decade (https://www.freebsd.org/doc/handbook/linuxemu.html), and the "lx branded zone" for Solaris has it as well (https://docs.oracle.com/cd/E19455-01/817-1592/gchhy/index.ht...).
Google Cache: https://webcache.googleusercontent.com/search?q=cache:http:/...
FreeBSD has been able to do this (with some limitations!) for years:
Nowadays, the most common use I've seen is as the Windows implementation of 'npm link' for Node.js developers.
If a fully-compatible terminal emulator doesn't exist yet (I have no idea) I bet there will be one within a year.
Edit: And COW (copy-on-write) fork only makes sense if there is memory overcommit. So to be fully featured it would need a separate memory subsystem with memory overcommit.
There actually have been [1], and I imagine MS has continued to flesh out those improvements.
Meanwhile, it seems like it is worth Microsoft's time to keep people on Windows.
I for one look forward to having all my vulnerabilities in one place... Linux, windows, server, desktop...
The author points to using grep and xargs and some other tools to quickly update a package. That's the key here. These bash/Linux utilities are productivity boosters for all the Linux and Mac/BSD people out there. I can't imagine living without them, and they're necessary for any system I develop on (which is currently a Mac).
- Wow, hell is really freezing over!
- The hardest part of running bash and other POSIX things under Windows is filesystem access. Windows uses drive letters and backslashes; Unix has a root filesystem with forward slashes. It seems they are taking the same route as Cygwin by "mounting" Windows drives at /mnt/c (or /cygdrive/c).
- If you just wanted bash and some posix tools, the harder but nicer way would be to patch them to understand windows paths. It is not clear to me that it is even possible, for example many tools assume a path that does not start with a slash is a relative path - while "C:\" is absolute. You would also want to make more windows apps understand forward slashes like "C:/Windows". To make things even more complicated, there are NT native paths "\Device\HarddiskVolume4\Users\Bill", UNC paths "\\Server\share", and the crazy syntax "\\?\C:\MyReallyLongPath\File.txt".
- I am really surprised this works in an appx container. From my little dabbling with modern apps in Visual Studio, I've found that they are incredibly sandboxed - no filesystem access unless you go through a file picker, no network connections to localhost (!?), no control of top-level windows, no loading of external DLLs. You can get around most restrictions for sideloaded apps, but not for windows store apps. That they can now package such a complex application as a modern app (with maybe only the linux subsystem DLLs delivered externally) means that they are slowly moving the modern/universal apps and traditional Win32 apps together with regards to their powers.
- Running a Linux kernel in Windows, with ELF executables on top (without virtualization), is nothing new; see CoLinux or andLinux. If I understand correctly, this new work uses a new Linux NT subsystem instead. It remains to be seen whether this is better (more performant) or worse (with the CoLinux approach, the Linux kernel is just another process, so if it crashes it doesn't take down the system).
- If they actually wrote a NT subsystem for Linux, this opens a whole can of GPL licensing worms, as you'll need to include internal NT headers. However, they say it is closed source, so I wonder how they did it.
- This really stands and falls with how well it is integrated in the rest of the system. I want to install tools in "Ubuntu" via apt and use them from cmd.exe, and vice versa. And long term, a X11/Wayland bridge would be nice too.
I wonder if I can use it via mintty instead of conhost.exe; at the very least, I could ssh into it from Cygwin.
As always, the devil is in the details; the rough edges where support peters out, or syscall inconsistencies creep in.
I wonder if this Unix filesystem layer will be able to break the Windows legacy path length limit. If so, the Linux version of Node.js will suddenly become much more useful than the Windows version.
> It remains to be seen if this is better (more performant) or worse
Sounds like performance, at least, will be better. TFA says "it's totally shit hot! The sysbench utility is showing nearly equivalent cpu, memory, and io performance." I'll reserve judgment until I see the fork() benchmarks. :)
Here's the famous Linus Tech Tips 7-in-1 video: https://www.youtube.com/watch?v=LXOaCkbt4lI
The reason WINE went with the library emulation route is that (a) the Windows kernel doesn't have a stable system call layer, and (b) the Win32 API is massive anyway.
Windows has an easier time emulating Linux at the very lowest levels because Linux has an ABI stable system call layer. If you emulate those, you can run ANY Linux binary.
It also means Microsoft doesn't have to ship or support hundreds of Open Source projects. They ship the syscall layer, and distributions ship the user layer.
-It facilitates app cross-compatibility
-For Canonical, it reinforces the idea that Ubuntu == Linux, which is really good for their bottom line
-I wouldn't be surprised if Microsoft forked over a solid amount of cash
http://o.aolcdn.com/dims-global/dims3/GLOB/resize/1345x666/q...
It's been a while since I've used Windows but I'm pretty sure it doesn't have a unix directory structure... is it the case that they map / to a folder inside windows?
What would be really amazing would be if you could use CUDA and cuDNN from Ubuntu executables in Windows 10.
* within the probably surprisingly broad limits of the WinLS syscall emulation, though it wouldn't support niche OS config stuff, SELinux calls, etc.
Crossover is probably the best you'll get: https://www.codeweavers.com/products/crossover-linux
Otherwise, if you only need Windows to game, I'd highly suggest PCI passthrough so that you can use your GPU in a Windows VM: https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVM...
Think of it as an Ubuntu chroot on Windows, with C:\ bind-mounted into the chroot.
I ordered a Surface Pro 4 and am planning to run i3 in a VM with my existing NixOS configuration. I wonder how this news will affect my workflow.
I have emacs running as a systemd service and client emacs connecting to it, so it's quite a killer combo for development.
https://en.m.wikipedia.org/wiki/Windows_Services_for_UNIX
It blew my mind at the time, but it was kind of a pain and I didn't stick with it. I don't remember why.
I guess this is about the same thing, only it's Linux instead of some other sort of UNIX.
Can't find any substantial writings on this, but it seems likely.
Also, you can share directories between the native filesystem and the VM quite easily. And if you're using something like VirtualBox or VMWare there are "unity" windowing modes available.
> Note that this isn't about Linux Servers or Server workloads. This is a developer-focused release that removes a major barrier for developers who want or need to use Linux tools as part of their workflow.
It helps Canonical deploy Linux on the server in places that refuse to run Linux on the desktop, since Microsoft has said they're not interested in replacing Linux on the server with lxss on the server. This is absolutely good for certain subsets of the "Linux community" with certain motivations and ideologies. (And awful for others, of course.)
This might be the thing that saves Windows as a dev machine for me. I'm a heavy cmd/powershell user but I'd migrate to bash in an instant.
I was interested in that too, since I write Unix utilities and it is sometimes useful to have them work on Windows as well. I've done that (checked that they worked on Windows) with a few utilities in the past (written in C) that did not use any very Unix-specific features absent on Windows. And I remember reading around that time that Windows (I think from NT onwards) had a POSIX subsystem.
Then more recently, as in, a few months ago, I wanted to check that out again, and did. IIRC I read (maybe on a Wikipedia page) that the POSIX subsystem is not present in Windows any more.
What could be good though is MS backing the Wine project. But I guess it goes too much against their lock-in DNA.
This will have been quite a bit more work than just a POSIX layer, IMO; POSIX defines a bunch of low-level functions that lie just under the C runtime library, but Linux defines a bunch more that expect a certain view of the world - things like clone(2), which lets you selectively choose what the forked process gets to inherit. POSIX only specifies fork(2). Linux implements fork(2) in terms of clone(2). And POSIX defines the API at the level of C; this will have had to implement the Linux ABI, where it differs from Win64. A minor detail of a bunch of assembly stubs, but work nonetheless.
Having had OS X, Windows, and various Linux distributions as my primary operating systems I would consider having an Arch Linux VM kicking around if you want all the packages in the world, maintained fairly well.
Depending on what version ranges dependencies are locked to, you could still end up with a pathological situation which breaks the path length limit, but I've only had that happen once since switching to npm 3, and it was easy to resolve.
If they actually fix these imperfections, that would be fantastic. It would address a number of issues that are "unfixable" in Cygwin flavors of utilities/apps.
And you can also buy a $5 USB sound card and have no issues at all. Or spend a few bucks more and connect a PCI sound card.
Solaris doesn't have VM over-commit either, and few people would claim it's not a fully featured UNIX.
In previous versions you could always use another console like Console2, but now it's built in (although the third-party options still have more features than the new built-in one).
For devs that do heavy Linux work (but have stuck with a Mac OS for GUI/app reasons), is it time to move (back) to Windows? If so, what would be a good laptop to get at the moment?
This is common in reverse engineering stuff.
I'm personally using a Dell XPS 13 (2014), soldered in RAM, but I did buy an ultraportable and 8GB is more than sufficient to do my all my personal work on. Work supplied me with a Lenovo W540 that is substantially more flexible, but it weighs as much as a couple bricks and I usually just leave it at home on the dock.
It's a Linux syscall translator for Windows. It works well enough to run a Debian userland, although it's got so many holes and rough edges that I would never, ever, ever suggest using it for anything other than a stunt.
It uses Interix to do most of the heavy lifting, so all LBW does is translate from Linux syscalls to Interix syscalls; we get a Unix filesystem, user permissions, sockets, fork, etc. for free. (Interix was great. I'm glad they're bringing it back from the dead.) Unfortunately not all the system calls map directly onto each other; Interix has a native fork(), but Linux implements fork() via clone(). I couldn't make threads work.
A few of the biggest problems were:
- the Windows page size is 64kB; the Linux page size is 4kB. The ld.so loader will try to map two bits of executable within the same 64kB boundary, and, of course, this doesn't work on Windows. I crudely hack around it by allocating pages of RAM and copying things. Write-back mapping only works at all if the application lets mmap() pick the address.
- very very very different register usage. glibc on Linux uses gs as a 'pointer' to the current thread's private data area, via a special syscall. Windows resets gs to 0 on every interrupt! I crudely hack around this by intercepting null pointer dereferences, looking at the instruction to see if it was gs, and then reloading it with the right value.
- even then, that syscall sets gs to point at a GDT segment with a size of 2^32; this wraps round the entire address space, which allows very large offsets in gs to be treated as negative numbers. Windows doesn't let you create GDT segments. It only allows LDT segments, and it caps the segment limit to the end of the user address space, so this trick won't work. I crudely hack around this by intercepting segmentation violations, looking at the instruction to see if it's a [gs+negative number] dereference, and then binary patching the executable to use a different instruction.
- glibc is horrible and undocumented. There's a big pile of key-value strings pushed onto the stack above the environment when the process is initialised, containing various magic numbers. ld.so will just crash if you get this wrong. I spent a lot of time reverse engineering the ld.so source code to figure out what these were and how to set them up.
It was all vile and horrible, but it worked surprisingly well (i.e., it worked, which was surprising).
Using the NT kernel's personality system to implement Linux syscalls natively is totally the right thing to do; that's obviously what they're doing here.
I would love to know about the internal Microsoft politics which made releasing this possible. I wonder how long it's been brewing? I did LBW in about a month of evenings; the core logic wasn't hard. I wouldn't be at all surprised if this has been floating about inside Microsoft for years.
For the last couple of years, as a Windows user, I have just been installing Git SCM, which includes something similar to this, and have been using that for all script / command line needs.
If this was baked into Windows so much the better! I would love it if nobody ever wrote a single CMD or PS1 file ever again. Let's please all just converge on bash and put this debate behind us.
"Windows NT was designed from the start to have modular subsystems. It was most infamously used to provide a POSIX subsystem which really only checked boxes on government acquisition forms. :-)"
The problem is that the Win32 user mode system will fight you every step of the way: CSRSS will not understand what you just did, for example. It's not generally worth it.
1) Linux has won the server (web) market. Developers would like to use a Unix box to work on their server code, so they typically move to OS X. This could prevent that switch, because they can still use Windows to develop their Linux server software.
2) Many projects start out as Linux and stay Linux, and are only ported after much time and effort to Windows. Enterprises, when faced with a tool that they want to use, will also look to switch off Windows. Now, rather than the cost of switching, they only have to pay to upgrade their Windows boxes to use the tool.
3) There is now a major incentive for developers to only build Linux binaries, because they will work in more places. This might cause a faster drain of developers as they eventually remove all Windows-specific code and can more easily migrate elsewhere. This feels eerily similar to the OS/2 story, and no doubt in the next week I expect to see more than a few articles discussing this very thing.
4) It will be much easier for Microsoft to bring much-loved Linux tools to Windows, so you can expect to see a more rapid increase of tools announced that now work on Windows.
5) Games: What about the graphical layer? Can I write the majority of my game as a Linux binary and only have the rendering bit left over to separately implement for Linux/Windows? Will this spur an increase of cross platform games?
Are there any Windows only server applications that would be useful in a non windows shop? I'm not trying to be snarky, I really can't think of any.
I would expect the most likely users to convert would be those using job specific desktop software.
More cookies for running ubuntu userland on windows on wine on emscripten.
I tried them – they don’t work at all in KDE.
Arch linux host, KDE as DE, Windows 10 Guest, just leads to a big black box in unified mode, and windows don’t properly occur in the KDE taskbar.
What I expect is integration equal to WINE – automatically putting stored data into the correct folders, easily accessible and mounted, integrating windows with the taskbar, etc.
And yes, this is "just for games" – but try gaming on a multiscreen setup when the game doesn’t show up in your taskbar and you can’t minimize it.
Every year I try a switch to the Linux desktop. This year I made it as far as trying to get multiple monitors working well. I also dabbled in gaming. In the end I went back to my work=Mac, game=Windows duopoly.
If they did that, it would also fix the awfulness of Python's stdout on Windows!
One of the good things about using "newer" languages than C for building cross-platform utilities[0] is that things like that come baked in[1].
[0] - https://github.com/EricLagergren/go-coreutils [1] - https://golang.org/pkg/path/filepath/#IsAbs
Remember the holy edict of Linux kernel development: you don't break userspace.
MAX_PATH in Windows = 260
Path to Linux file system root from Windows user space:
"C:\Users\<user>\AppData\Local\Lxss\rootfs\" - about 45 chars
Max depth reachable from Windows user space = 260 - 45 = 215
Answers I've received before is "it can call the native API", but any program can call the native API. So, what's in a subsystem?
Well, it's not like Microsoft made much of an effort to remind people. It was discontinued as a separate project after about Windows XP. It continued to be part of Windows Server[1], but AFAIK, it was no longer present in the client versions.
[1] at least until 2008 or 2008 R2, I didn't check later versions.
(unless I am missing something)
But it would make sense that certain things should be doable on literally any computer: grep, find, vi, edit, etc. I can't come up with a complete list, but it would be great to start in that direction.
The whole python ecosystem is also better on Linux overall.
Anything strictly desktop-oriented is better on OSX of course.
Windows also has mountpoints. When I use a Windows system, I pretend that C: is the only drive, and mount external volumes under the root, Unix-style. I then install Cygwin in the C:\ root, creating the illusion of a fairly Unix-ish filesystem layout.
Apple leveraged a form of BSD *nix to get developer mindshare. Now Microsoft is leveraging FOSS Linux to leapfrog Apple. It is a smart move. While OS X isn't bad for *nix-like development, it still involves jumping through hoops and compromises. (Homebrew/MacPorts)
But it's alive and well (and awesome) in SmartOS, with active work going on to merge it into OmniOS, and eventually will be upstreamed to illumos-gate.
I would most likely switch back to Windows as my primary/only machine (because I also like to play video games sometimes) if I had the same kind of unix-like command line environment that I get in OS X.
Right now I basically need 2 computers at home to meet all my needs, but this would allow me to reduce it to one, so I could get a much better one (instead of the 2 mid-range ones I have now).
Having a large number of people developing on Ubuntu, may increase the demand for Ubuntu Server (with support where the real money is). I really only see an upside for Canonical.
0. https://blogs.msdn.microsoft.com/vcblog/2015/11/18/announcin...
1. https://blogs.msdn.microsoft.com/vcblog/2016/02/29/developin...
Well, I guess it'll be easier to port roguelikes which use ncurses.
[1] http://www.slideshare.net/bcantrill/illumos-lx
[2] http://us-east.manta.joyent.com/patrick.mooney/public/talks/...
Longer. I was using Linux binary compat in FreeBSD to run the Linux version of VMware Workstation back in 2001.
It has always been very well done, and as you can see, not just running 'cat' or 'echo' Linux binaries ... but full-blown commercial software packages.
I had Windows XP running, as a dev environment, within the Linux version of VMware Workstation, on my FreeBSD laptop. Worked great.
There was an oft-repeated claim in those days (the days of FreeBSD 4.x) that linux binary compat in FreeBSD would run linux binaries faster than linux would.
Personally I think they should change the terminal to unicode by default and make the 1% of pos-ported-from-windows-95 apps `chcp` before they work.
This looks to me like typical Microsoft strategy that they utilized a lot 25 years ago.
1. when not leader in given market, make your product fully compatible with competitor
2. start gaining momentum (e.g. why should I use Linux, when on Windows I can run both Linux and Windows applications)
3. once becoming leader break up compatibility
4. rinse and repeat
Happened with MS-DOS, Word, Excel, Internet Explorer, and others.
Ten years later, I'm totally bored with Apple and OS X. Especially after trying to use Xcode and the iOS Development Cloud Platform or whatever. So yeah, I'd probably switch to Microsoft Linux.
And I could totally play Counter-Strike and Diablo II while pausing from coding bash scripts? Omg!
Out of curiosity, why would you not just go with a Linux desktop for that?
"Microsoft finally add a temperature control knob to their toaster just like all the others!"
Comment: But it isn't blue, these corporate behemoths don't know what they are doing.
As a user and lover of all the ecosystems, the argument from the non-Microsoft crowd is beginning to appear quite desperate.
Is this not helpful?
https://wiki.documentfoundation.org/Development/BuildingOnWi...
LibreOffice has official Windows builds, so I would expect that they can already build the software on Windows.
*persistent: open command line, write something, close command line, open command line, arrow up/down shows the last executed command.
And yes, I know persistent command line history can be set via power shell, but it needs to be set (i.e. not working out of the box) and it is not quite the same.
I know of no language runtime and ecosystem that has better cross-platform support than Node (hm ok maybe Java also). I develop solely on Windows (I just like it better) and virtually all of NPM just works on my box. Even stuff people never tested elsewhere than on their Macs, it just works. Express, webpack, mocha, phantomjs, it's really quite impressive if you ask me.
With WINE, everything works fine – but not with UPlay.
So I can run the game, via UPlay in the VM (but not unified mode), or I can pirate it and run it in WINE.
But unified mode, or paid in WINE, doesn’t work.
I always use Linux for desktops; I see no reason not to.
The question to my mind is: are they going to manage to limit this to development and advanced users, or will whole applications that target Windows start depending on the Linux emulation to work?
I would never go back to windows but would happily move back to linux if I had to
EDIT: I should note I didn't expect to like my Macbook air so much (fantastic machine)
Before you ask, yes I do have a Linux laptop (Acer C720P ChromeBook, unlocked with Debian + Gnome), and with all the sudden issues that pop out of seemingly nowhere (my latest dragon is a recurrent kernel module crash that can fill my disk up with .core files in 10 minutes), I've switched back to a 13" rMBP. Though, I have heard that 2016 is going to be the year of the Linux Desktop...
Decades of history, and their own self-interest.
>Fact is you have no idea
Sure I do. They've acted this way for decades, under various different management. To think they're suddenly going to become some nice, ethical company is sheer lunacy.
>and all signs point to a more open MS that has learned that simply throwing their weight around won't work anymore.
What signs? I haven't seen any. They've done stuff just like this before, and it always turned out badly.
The reasons were 1) it ran the unixy stuff I needed for university AND modern games at the same time, and 2) it was boring and conservative. In a time where Linux and Windows were changing and breaking (Windows 8, Gnome 3, ...) it was nice to use something stable, well-tested, and polished.
Apple needs to do a utility that supports this as smoothly as Bootcamp supports multiple boot.
It's a bit more than UNIXy, (the proper term is Unix-like), it literally is UNIX. It meets the UNIX 03 specifications.
Also, the motivations for these move predate the rise in popularity of Apple. For years, one of the biggest complaints about Windows was the lack of a good command line interface. There was the legacy CMD.EXE, which provided support for DOS commands and batch files, and PowerShell, which people either love or hate. The reality, however, is that overwhelmingly, the combination of bash/zsh and coreutils, binutils, util-linux, etc. won out a long time ago. Most schools use a flavor of Linux (maybe Solaris) for teaching Computer Science (and related disciplines), so many people who have formal training are used to those. Those people tend to teach other people to use them, etc.
Some people bemoan the fact that the CLI never evolved past its UNIX origins, but the reality is these tools work just fine. There's never been a reason to evolve them.
At very worst you could SSH to localhost and X11-forward to the same host -- but you can optimize that.
If they just implemented all Linux syscalls on top of NT, that would replace a Linux kernel. There would be no Linux kernel running on top. But in that case they would also need to emulate stuff like /proc. (This is the "reverse Wine" scenario.)
So I personally think it is either:
- There is no new subsystem; this is just a linux.exe running the Linux kernel as a process, like CoLinux does, or
- There is a new subsystem. One way to do this is to port Linux to a new "architecture", namely the NT HAL. You'd call into the NT native API from the Linux kernel, which would mean you'd have to put the headers with the native APIs you use under the GPL.
Armchair kernel development is fun :-)
They can also of course load DLLs, and there are APIs for moving your app Windows around (with some limitations).
That said, I do not know exactly what they're doing here, and it may be using some new capabilities being added to the system in the rs1 release.
Also filesystem ACLs are quite different on linux and windows, it'll be interesting to see exactly how the one maps to the other. What will chown and chmod do?
Does that mean I can do "apt-get install nginx" from their new "bash" terminal app? Does that then run under port 80 in Windows? Since no VM is involved.
I'm still a bit confused.
- What about lack of all the Linux/OS X GUI software?
- What about lack of all the UNIX OS features?
- What about all those billions and billions of Windows malware, viruses, adware etc.
- What about all the spying and restrictions that Microsoft has integrated into the Windows? (e.g. cannot block Microsoft spy server in the hosts-file, forced updates etc.)
- What about the fact that OS X and Linux have always been at least decent from developers point of view but Windows has always had problems and then things like Vista and Win8 happen.
- What about the advertisements served to you in the login screen?
- What about all the future shit MS will throw at you?
- Other stuff I can't remember now
If, and IF, this actually works out well, I would say this finally makes Windows usable for software development. However, I don't see any reason why anyone would change from a UNIX-based system to Windows unless they plan to make even bigger changes in the future... (like rewriting the whole of Windows to be UNIX-based, for example. :) )
I think this is a great move by Microsoft to be able to EXEC a competitor's binary files natively. But, I think it risks being an admission that Win32/64 syscalls and Windows file system semantics are a crufty boat anchor holding developers back. To admit that risks inviting more developers to bail on Windows.
That said, F-it. If updating to Server 2016 breaks it for good, we might actually prioritize a rewrite within the next two decades.
The kernel is an important part of the system, sure, but only one among many important parts. We therefore think that, to give full credit to the authors, the whole system should be termed GNU/Windows.
I just got tired of fixing sound issues, trying to make a scanner work or investigating CPU states to fix heat and battery draining issues on yet another laptop. Ultimately, I think, all of this is a result of the unresolved issue of who should write and test device drivers.
It doesn't help that I disagree profoundly with the prevailing package management philosophy of Linux distributions, but that is a comparably superficial problem that can be worked around.
As a side note, Windows NT had a POSIX layer as one of its three main APIs (Along with Win32 and OS/2), so in theory, at least, it should have been easy to port true UNIX apps to it. I have no idea what state the POSIX layer is in now; probably in a similar state to the OS/2 layer.
I'd rather use a toolchain I know better. In fact, I'd love to use clang on Windows.
If I were buying a Windows machine, the only one I would consider is the Microsoft Surface Book.
If MS's marketing department has half a brain at all, they will accidentally leak such a memo fairly soon.
> What about lack of all the Linux/OS X GUI software?
Windows prolly has more GUI applications than both those OSes combined. That's not necessarily a good thing but it's not bad either. It just means there is a Win substitute for everything.
> What about lack of all the UNIX OS features?
Same answer as above.
> What about all those billions and billions of Windows malware, viruses, adware etc.
I download a lot of crap on my home Win computer and haven't had a virus once in the past 6 or 7 years. There are likely more Android viruses active now than Windows.
> What about all the spying and restrictions that Microsoft has integrated into the Windows?
If you don't give permission the action is not taken. Granted I am currently getting spammed to update my home computer from win 7 to 10 but it hasn't force installed on me. Likewise for automatic updates.
> What about the fact that OS X and Linux have always been at least decent from developers point of view but Windows has always had problems and then things like Vista and Win8 happen.
Which is what this new initiative is trying to fix.
Don't get me wrong. I love my osx for dev and my *nix boxes for servers. But if I can get one machine/OS for desktop development of nix and windows without having to run silly emulators or switch between VMs then I'm sold.
$ grep DESC /etc/lsb-release
DISTRIB_DESCRIPTION="Ubuntu 14.04.3 LTS"
$ uname -a
Linux kappa 3.13.0 BrandZ virtual linux x86_64 x86_64 x86_64 GNU/Linux
$ wc -l /proc/cpuinfo
608 /proc/cpuinfo
$ /native/usr/bin/uname -a
SunOS kappa 5.11 joyent_20160204T173314Z i86pc i386 i86pc
And this seems to be implemented as an NT subsystem, which makes tons of sense.
% uname -srm
FreeBSD 10.3-RELEASE amd64
% /compat/linux/bin/bash
bash-4.1$ /bin/uname -srm
Linux 2.6.32 i686
bash-4.1$ /bin/uname -a
Linux viserion 2.6.32 FreeBSD 10.3-RELEASE #0 4b75b72(releng/10.3): Fri Mar 25 19:14:5 i686 i686 i386 GNU/Linux
bash-4.1$ cat /proc/cpuinfo | grep 'model name' | head -1
model name : Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz
(My own answer is that cheap VPSes are 150ms away from me, and with Virtualbox I had a few problems, always related to Windows file permissions yada yada yada...)
I don't want or need a Linux ABI, I just want to run a Linux container on Windows (if I have to use windows, which I would prefer not to do).
If node running on windows needs access to a database running on the Linux layer then what happens when things aren't working? They can't possibly make all programs interact in a seamless way.
I'm imagining frankenstein programs that are hacked together with code that only works on this frankenstein OS.
I switched to Mac for my personal development in 2001 but still used Windows at work. I have found over the last couple years that I have been migrating back to Windows for quite a few things. For me personally, I find the UI in Windows to be more productive and faster. The features Apple has been adding are not things I'm very interested in and I haven't been using my 2009 MBP for much anymore except syncing with my iPhone. A number of Linux VM's are always around for development work and if I can do it all now in Windows, I'm all in.
I've been holding off buying a new laptop and, if this new feature works as advertised I will not be buying Apple.
Haven't had malware in years. Vista and Windows 8? Advertisements? Future shit and other awfulness you can't remember? Yeah those really sound like valid points.
They don't run the linux kernel at all. That /proc/cpuinfo is a part of a limited subset from what procfs usually offers. They enter their linux compatible "subsystem" via a bash binary, which could probably be any linux elf binary, since they run them unmodified. So, it looks like they actually implemented most of the syscalls with some of them emulating necessary parts of linux environment, like procfs.
[1] https://sec.ch9.ms/ch9/5db6/8ee786b7-9fc5-45bf-94d0-16ea9176...
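Their read can be sanity-checked on any Linux box: the `bash` they launch is just a regular Linux ELF executable, identifiable by its magic bytes, and the subsystem's job is purely to service its syscalls.

```shell
# An ELF binary starts with the four magic bytes 0x7f 'E' 'L' 'F'.
# Any such unmodified binary is, in principle, what the subsystem can run:
head -c 4 /bin/bash | od -An -c
```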
Why not just use Linux? It has a GUI. It has apps. It does everything a modern desktop or laptop needs to do. It really is great.
Gave this a long look and my main beef is that I couldn't possibly do anything on a Windows machine in its current state. Linux isn't just about running apps - there's a philosophy behind the system. Users first!
As long as Microsoft continues to disrespect the rights of users in regard to privacy, data-collection, data-sharing with unnamed sources, tracking, uncontrollable OS operations (updates, etc) - I will never go near it.
I find it especially offensive that ex-open source and ex-Linux users (working for Microsoft) have the audacity to come on here and try to sell this as a 'Linux on Windows' system when most of what makes Linux special (respect for the user) has been stripped away.
It's like giving a man who is dying of thirst sea water.
Most comments here appear to be positive and that's fine... whatever. To anyone reading this... please don't sell your souls and the future of software technology for ease of use and abusive business practices. /rant
As for which OS (Win/Mac/nix) controls the majority share of developer desktops, I feel like it's always going to depend on what you're developing, so talking about the overall "biggest slice of the pie" for developers is less meaningful than talking about who has the biggest slice in the consumer space.
For example, a backend web developer might look at this "Winbuntu" thing and suddenly be attracted to the idea that they could trade their Mac in for a PC that lets them do all the UNIXy stuff they need for their job, but at the end of the day lets them play the latest PC games...
...unless SteamOS continues to grow in popularity, in which case Microsoft loses share because a Linux-based laptop suddenly seems like the best choice for a gamer-developer.
On the other hand, if we're talking about a company handing work laptops out to employees, frontend developer-designers are likely to continue preferring (requiring, really) Macs for a long time to come, and that likely means that it makes more sense to keep a common platform and hand Macs out to everyone, since so many server devs are already well-accustomed to using Macs. And though Windows might eventually become attractive enough to professional designers, Linux is deeply neglected in the design-oriented space.
But that's all just web development, which has much more fluidity than other types of development. Game developers will continue to develop on the platforms that they intend to support (or Windows for consoles, at least for the time being). iOS developers will continue to develop on Macs. Mac developers will develop on Macs, Windows developers will develop on Windows, and Linux developers will develop on Linux. I'm barely an Android developer, but it seems to be slightly more natural to work on a Mac or Linux machine, and yet "Winbuntu" would likely remove that advantage.
I agree that with Windows embracing Linux so deeply like this, it certainly opens the door for a lot of people to make the switch-- personally, I bought a Surface Book because I was excited by the hardware, but quickly returned it once I realized how unhappy I was without native access to a terminal. If Ubuntu continues to flourish as a fully-fledged aspect of Windows, I might consider buying the Surface Book 2.
But my personal anecdote also illustrates the greater point-- this opens the door, but it doesn't push anyone through it. I was tempted away from Apple because they've stopped innovating on their laptops. In order for developers to switch to Windows, they'll have to be tempted for their own reasons. And old habits do die hard.
Some perspective is needed here. Windows has more GUI applications than both those OSes combined and multiplied by some large number. A windows PC can run every Windows application made in the last twenty years, with some exceptions, and it's an infinitely larger market for commercial software.
The example commands from the article are all available with the git distribution:
> cp -a
> find | xargs | rename
> grep | xargs | sed
You can do all that - plus ssh (with ssh-agent) from the DOS prompt (you don't need PS, PuTTY or git bash).
There's vim too, which comes with syntax highlighting; there are solarized dark/light colour palettes for the DOS prompt [1], as well as decent enough Consolas fonts that you can use.
You can do an `ls *.exe` in the C:\Program Files\Git\usr\bin directory to see the list of programs that are there.
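As a quick illustration, here's a hypothetical batch rename of the kind those commands enable, using only POSIX tools of the sort Git for Windows bundles (the `demo` directory and file names are made up for the example):

```shell
# Batch-rename *.txt to *.md; the quoted "$f" keeps names with spaces safe.
mkdir -p demo && touch demo/a.txt demo/b.txt
for f in demo/*.txt; do
  mv "$f" "${f%.txt}.md"   # strip the .txt suffix, append .md
done
ls demo
```

The same thing can be done with `find | xargs | sed` pipelines, but shell parameter expansion avoids a subshell per file.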
Now that Windows 10 has done some 20 year late improvements to the underlying console window [2], you can properly resize the window and the text flows properly.
The only things I miss are the `history` and `!` commands, for which I wrote a hacky bat-file implementation [3].
Edit: Clink [4] appears to be a fully compatible GNU history (Readline) implementation.
Chocolatey is pretty awesome too.
[1]: https://github.com/neilpa/cmd-colors-solarized
[2]: https://technet.microsoft.com/en-us/library/mt427362.aspx
[3]: https://ianchanning.wordpress.com/2014/10/29/dos-command-his...
It's a lot easier to emulate syscalls than it is to do something like CoLinux. Additionally, I can't imagine Microsoft would EVER let GPL'd Linux code into their kernel.
A GNU userland. A plethora of tiling window managers. A selection of clean terminals. Every single thing Debian's or Arch's repos offer which one must turn to brew for.
And of course there's freedom too, which is nice.
> Every year I try a switch to Linux desktop. This year I made it as far as trying to get multiple monitors working well.
I've got multiple monitors running on Linux, over HDMI, at home & at work, at differing resolutions & orientations. I use arandr-configured xrandr scripts which set my desired orientation with a quick keystroke in my window manager. What more does one need?
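For reference, a minimal sketch of such a script — the output names (eDP-1, HDMI-1) are assumptions that vary per machine; `xrandr -q` lists yours. It's guarded so it degrades gracefully where no X session is running:

```shell
# Hypothetical dual-monitor layout, the kind of thing arandr generates
# and you bind to a window-manager keystroke.
if command -v xrandr >/dev/null 2>&1 && [ -n "$DISPLAY" ]; then
  xrandr --output eDP-1 --auto \
         --output HDMI-1 --auto --right-of eDP-1 --rotate normal \
    || echo "xrandr call failed (different output names?)"
else
  echo "skipping: no X session here"
fi
```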
- Drag-to-resize
- Automatic text reflowing on resize
- Ctrl + X/C/V support for cut/copy/paste
- Selection that does line-wrapping
- Transparency
- A full-screen mode
[1]: http://futurice.com/blog/a-saner-windows-command-line-part-1
Nothing beats running and developing on localhost.
> - What about lack of all the Linux/OS X GUI software?
What about the lack of windows software on those platforms? It goes both ways.
> - What about lack of all the UNIX OS features?
Which features? What about the Windows OS features you don't get on a UNIX OS? Again, it goes both ways.
> - What about all those billions and billions of Windows malware, viruses, adware etc.
There are plenty of Windows viruses and malware, but I will say the most problematic security problems I've had have all been on Linux boxes. I would still count Windows as more problematic overall due to the quantity, but I believe Microsoft's focus on security in recent years has paid off, and it's nowhere near as bad as it used to be. Also, to some degree, the prevalence of malware and viruses comes with popularity, and that popularity carries its own advantages (more supported software). It's a trade-off using a platform where some software you like may not be available (e.g. games).
> - What about the fact that OS X and Linux have always been at least decent from developers point of view but Windows has always had problems and then things like Vista and Win8 happen.
Am I supposed to know what this means? People have been using Windows as a development platform for a long time. Those that want to use Visual Studio still do. Windows Vista was crap, but I didn't find Windows 8 bad at all. Around Windows 7 is when it started actually being viable for me to run, and I think it's gotten consistently better over time. The biggest problem I know of that people had with Windows 8 is the start menu change, which to be honest is a really small thing, people just didn't like it and it was front and center.
> - What about the advertisements served to you in the login screen?
I haven't seen any.
> - What about all the future shit MS will throw at you?
I'm not sure how this puts Windows in any different light than OS X.
> - Other stuff can't remember now
Seriously?
> - What about all the spying and restrictions that Microsoft has integrated into the Windows? (e.g. cannot block Microsoft spy server in the hosts-file, forced updates etc.)
This is valid, and would be my number one reason for not running Windows at this point if other considerations didn't outweigh it for me.
And as you said, there is nothing new in this. The whole "hell freezes over" thing gets a bit old because Microsoft has done this same routine countless times before. When they are the underdog, seeing a fleeing userbase, etc., they pragmatically veer towards open and integrated. When they aren't, they close off and exploit. (See Microsoft's arrogance and hubris as they exulted in their success with the Xbox 360 -- early initiatives like XNA, their unloved community gaming thing, abandoned and left to die -- and now that they're losing with the Xbox One, once again that wonderfully open and accommodating company returns. People pretend it's new.)
Another example I would give is MSN Messenger -- Microsoft did a loud, public campaign, including taking out ads in newspapers, pushing an open messaging platform, interoperations, etc. Microsoft had just started to get into the messenger game, so of course they didn't want to be kept out via the network effect.
Then, of course, MSN gained users (being pushed on users, automatically configured, tends to do that). Microsoft made a complete 180 in approach. Soon they incorporated an expensive licensing program that third party apps had to use to interoperate with MSN Messenger, endlessly doing technical fixes to block third party access.
What happened to that gregarious, open and cooperative Microsoft that was taking out ads to implore AOL for blocking access? The situation changed, and suddenly it wasn't in their interest anymore.
With the availability of Unix command line tools (and maybe even GUI apps) on Windows, this becomes a very viable platform for me as a developer. I'd still be throwing away most of the software I purchased for OS X, but at least Windows equivalents are available.
If Microsoft is indeed courting programmers again, this is a smart move.
Are Ubuntu's libraries available? Could you theoretically produce a statically linked binary under Windows and then run it natively on 'real' Linux?
I also bet many people would disagree with your statement that W10 is unusable and worthless without this feature.
The Windows kernel always existed and has evolved over the years: NT, 2000, XP, Vista, 7, 8, 10. They didn't develop it as the "last missing piece of the puzzle".
Unfortunately I still think this is going to be kind of a shitshow as long as they keep the black box nature of conhost. The terminal apps OSX and most linux distros ship with are fairly bare-bones too and I doubt reprogrammable 256-color mode is on microsoft's roadmap.
If they're smart, they'll allow a Linux ABI exe to sit on one end of a pty, and Win32/UWP exe on the other, so people can use a native Windows terminal emulator instead of needing Xming or similar.
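A rough illustration of what a pty buys you, using `script` (from util-linux/bsdutils) as a stand-in for the kind of bridge described above:

```shell
# Without a pty on stdin, programs can tell they're not on a terminal:
tty </dev/null || true    # prints "not a tty"
# `script` allocates a fresh pty around a command, so the child believes
# it is talking to a real terminal:
if command -v script >/dev/null 2>&1; then
  script -qec 'tty' /dev/null   # prints a /dev/pts/N device
else
  echo "script(1) not available"
fi
```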
I hate it. I bet Stallman, Ken Thomson, the GNU Project, Bell Labs and many others are crying and laughing at the same time.
We live in a world built upon the previous one, as the previous one already was. Some things we forget, others become lore, folklore, myths, and others are lost... It is similar to Facebook providing "the Internet". Or Dropbox providing 'rsync'. Maybe one day the common man will rediscover plain text and the command line interface. And then the hipsters will use it.
Some links:
- find http://doc.cat-v.org/unix/find-history
- grep https://medium.com/@rualthanzauva/grep-was-a-private-command...
- cp https://news.ycombinator.com/item?id=8305283
- wget https://en.wikipedia.org/wiki/Wget#cite_note-13
- AWK was created at Bell Labs in the 1970s, and its name is derived from the family names of its authors – Alfred Aho, Peter Weinberger, and Brian Kernighan. ~ https://en.wikipedia.org/wiki/AWK
- http://www.cryptonomicon.com/beginning.html
BONUS:
Let's not call it Embrace, Extend, Extinguish until we see the Extend & Extinguish. Microsoft is a very different company than it used to be.
If the Ubuntu on Windows personality/subsystem gave Linux processes full permissions to /mnt/c, I'd expect that would be a security vulnerability, and I'd assume they are doing some mapping. Unfortunately, in the video they posted I never see them do `ls -l` on `/mnt/c`, only a subdirectory they created.
The Native API is mostly implemented by the kernel itself, ntoskrnl.exe, which has a system call table (KiServiceTable). Most of the native API is exposed to userspace via ntdll.dll, which calls through that system call table. This API is available from very early boot.
The Win32 subsystem is mostly exposed by win32k.sys, which is a kernel mode driver. It has a second system call table (W32pServiceTable) which is then consumed by user32.dll and friends. Some parts of the Win32 user-mode API are implemented by calling the Native API via ntdll.dll, though.
Processes which exist in the Win32 subsystem have additional kernel data associated with them (the private state of win32k.sys for that process), have additional win32k.sys code run during initialization, and can therefore make use of the services of that driver.
If you create a Native process with RtlCreateUserProcess, it won't be able to make Win32 API calls. But it can run in environments where a Win32 process can't -- for example, autochk.exe, which is the version of chkdsk which runs as part of boot before the graphics driver has loaded, is a Native executable.
There's also support in the PE executable format to indicate which subsystem to use, so the loader knows which type of process to create. Obviously, this won't be relevant for running Linux ELF executables.
The layering isn't perfect, especially since while NT was designed to support multiple subsystems (OS/2, Win32, Posix), Win32 is the only one which Microsoft have historically focused on.
I believe that at this point Microsoft does not have the critical mass to create an app store; there is not enough momentum. Only Linux, Android and Apple have successful app stores. If Microsoft allows the Debian package manager, it can bring a lot of developers back to Windows. And there are still many companies that use Windows desktops and laptops.
There is potential to evolve this into a microsoft app-store that is compatible with ubuntu. That would be a win-win for microsoft and may potentially boost sales and slow down the decline of their OS.
1. "A multi-billion dollar corporation with almost an omni presence on the desktop computer was seeing it was missing something that was readily available with other competing solutions, and decided to nitpick the greatest parts of those competitors and incorporate it in their own solution. If you can't beat 'em join 'em strategy."
2."Computing and software is becoming more fluid, and an OS doesn't have clear boundaries as it used to have but it is just made up of whatever the best ideas are, ideas that are expressed by software and thus interconnect via API's and can be glued together to make the best possible solution to fit your needs. And that big multi-billion dollar company wants the best solution."
(If you like that last idea, I have news for you. This is exactly the idea behind GNU/Linux, and it already exists. So you know, instead of being excited and giddy for all this you could just install Ubuntu.)
And this is where the sticky part is. The openness of GPL software makes way for Microsoft's current approach, but it also bites itself in the butt. Using bash in such a way, incorporating this Free and Open software within a closed OS, suffocates the ideas of the GPL and dilutes its purpose. Microsoft is not GPL-ing its own code base (sure, small parts); it's just using the best parts and sticking to its own strategy. It's a smart, clever, and tactically strong move, and it will be very successful. But GPL software thrives and exists because of other GPL software, and this approach works and creates beautiful things (take how GCC made way for Linux, which made way for a gazillion other tools, etc.). There is a viral aspect to the GPL that will be completely cut off. I believe GPL software in a non-GPL environment has a harder time reaching its potential.
But most people will just look at it from a user perspective and think: 'hey, I get the best of both worlds'. They forget that this software only got this far because of exactly this license. It's a fundamental and integral part of its success.
Since what's happening was completely unheard of throughout the '90s and '00s, I don't think either side would have guessed this would ever happen. But it did. Just like a couple of weeks ago, Microsoft decided to create its own Linux version to support SQL Server on Linux. So even stranger things might happen, and I'm trying to keep an open mind. That being said, I really think the nature of software is and should be fluid, and so it should be able to be stitched together to create what you need. But this only works really well if all parts of the 'quilt' follow the same rules.
Plus, Windows is still the leader in the desktop and laptop market to this day. They are facing some tough competition, but "not the leader in a given market" is simply not true.
It's not the best desktop environment out there, but apart from desktop, linux is the most popular system at every step from small embedded systems to the most powerful supercomputers. That wouldn't be the case if it couldn't make hardware available in a reliable and efficient manner.
If they also add Linux userland support to Windows Server, in the near future you will be able to SSH in and get bash prompt on your Windows infrastructure... natively.
I'm kind of liking the future.
I dual boot my laptop between Windows and Linux because the WiFi network at my school has issues with Linux... so I'm forced to use Windows (also for games), but it's such a pain. Today it updated forcefully while I was trying to study; I tried to postpone the update but the option was grayed out. The Windows philosophy through and through is to treat users as ignorant and incompetent idiots for whom even the most basic of tasks must be performed, and who cannot make important decisions. This, IMHO, is the epitome of disrespect, and the reason I look forward with great anticipation to the day when I can operate solely within computing environments that afford me the same dignity as the cars I drive.
I mean, sure, but (unless things have changed dramatically in the many years since I was a Windows dev) you don't have to do anything more than install VS. The compilers and nmake can be used without opening the GUI. IIRC, you can feed a VS project file (or -I'm pretty sure- a solution file) to nmake and get the same result you'd get from loading the GUI and pressing build.
- Better desktop environment
- Better package management
- More up-to-date packages
Some of my knowledge of OS X is likely outdated; I haven't used it since 2011.
I started using Linux when I was at University late 90's early 00's (Mandrake was my first distro). I switched to OS X around 2005 and used it as my primary operating system for about 5 years.
I used OS X because I'd purchased a MacBook Pro (mostly for the hardware). I still think MacBooks are the nicest laptops I've used to this day; I went through three iterations of MacBooks before I stopped using OS X. I used OS X because it was good enough, but I never fell in love with it.
I absolutely hated the desktop environment: silly things like no ability to customise anything, the lack of workspaces, having to hit Command-Q to kill applications (because the 'x' button wouldn't close them properly), stuff like that. Workspaces came in a later OS X update, which addressed some of my gripes.
It was never easy to install third-party packages and libraries on OS X. I think this has improved now; when I used OS X it was a mess (especially compared to the ease of something like apt). The native system packages were always really ancient: old versions of GCC, Emacs, Python, etc. Installing newer versions of these 'default' packages was not straightforward at all; I remember having huge issues getting Python 3 working.
Nowadays I run Fedora very happy with it. Not compelled at all to switch back. Linux support for modern laptops is a lot better than it was when I first started using Macs.
My current job is in an 'enterprisey' environment, and I'm forced to use a locked-down version of Windows here. Almost anything would be better.
Is WOW in the cpu identifiers "windows-on-windows", the shim they use for "xp mode"?
If everything's mounted under /mnt(/c..), and the screenshot shows nothing mounted there - Can this run just like a VM without the host fs mounted?
I'd be really curious to see; if linux attempts to access raw block devices in /dev/, what's actually there. in the process list in windows, are all linux processes enumerated.
For now it just looks like a linux VM with the guest fs mounted in the host, and the host fs mounted in the guest.
https://www.linkedin.com/pulse/why-you-should-help-me-create...
Any chance you're planning for a "desktop" version of SmartOS?
On the contrary; there have been many valid reasons to evolve them, but backward compatibility was deemed more important.
Example #1: it is possible to write a sh/csh/bash/?sh script that handles file names with spaces, slashes, quotes, question marks, etc, but one would hope that would be made a bit easier, almost half a century later.
Example #2: the hack that is xargs for handling large numbers of arguments. To write a truly robust script that handles directories with an arbitrary number of files, one should run a pipeline using find and xargs, instructing xargs to do the actual work (and you cannot even use find and xargs with their default settings; you need -print0 and -0 flags to handle file names with spaces, etc)
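A minimal demonstration of why the NUL-delimited flags are needed — a NUL byte is the only delimiter a Unix path cannot contain, whereas spaces and even newlines are legal in file names:

```shell
# Create files with spaces in their names, then delete them robustly.
mkdir -p scratch
touch "scratch/a file.log" "scratch/b file.log"
# Without -print0/-0, xargs would split "a file.log" into two arguments.
find scratch -name '*.log' -print0 | xargs -0 rm --
find scratch -type f | wc -l    # no files remain
```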
If programs received arguments unexpanded, and the system had a library for expanding arguments, many use cases would become a lot simpler, and scripts could become more robust.
And yes, that could have been evolved. Headers of executables could easily contain a bit indicating "I'll handle wild-card expansion myself".
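For anyone unfamiliar with the current contract: the shell performs the expansion before the program ever runs, so the program only sees an already-expanded argv — which is exactly what such a header bit would have to opt out of.

```shell
# The shell, not the program, expands wildcards.
mkdir -p globdemo && touch globdemo/1.txt globdemo/2.txt
printf '%s\n' globdemo/*.txt     # expanded: printf receives two arguments
printf '%s\n' 'globdemo/*.txt'   # quoted: the literal pattern survives
```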
Example #3: man pages, IMO, should be stored in a special section inside binaries. That ensures that the man page you read is the man page for the executable you have.
Example #4: http://unix.stackexchange.com/questions/24182/how-to-get-the... shows that things _have_ evolved. Reading and parsing /etc/mtab isn't a reliable way to find all mount points, just as reading /etc/passwd file isn't the way to find password hashes anymore, ar has long been upgraded to support file names longer than 14 characters, and zip knows more file attributes than it used to.
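A quick way to see that particular evolution on a typical modern Linux system, where /etc/mtab is usually just a symlink into /proc and the kernel's own mount table is the source of truth:

```shell
# /etc/mtab is commonly a symlink to /proc/self/mounts (or ../proc/self/mounts)
ls -l /etc/mtab
# Second field of each line is the mount point, straight from the kernel:
awk '{print $2}' /proc/self/mounts | head -3
```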
The only place that / as a directory separator doesn't work is when interpreted by cmd, as it's then ambiguous with DOS-style switches. And even then, modern cmd tries to understand / properly when possible. Most of the time it works as long as it is not the first character of a path (which you would not frequently see in Windows anyway), as there's then no way to tell it apart from a switch with a long name.
The canonical representation of paths in Windows uses a backslash, but 95% of the time the slashes are interchangeable.
So far they have gotten better at communicating with the open source community, but their business practices are all the same, and Windows 10 only proves it.
App-stores require critical mass of mainstream users which Windows still has more of beyond anyone else. Developers follow the masses, not the other way around.
After all, there's not that much ground-breaking "news" in this story.
About the technical part:
1. I wish they wouldn't distribute it in cooperation with Canonical, a company with a reputation that is rapidly decreasing for very good reasons.
2. I, for one, would surely have preferred the `ksh` which came with earlier "Unix for Windows" packages; IIRC it was the "MKS ksh", but I guess they had a reason.
3. I'm afraid of what this means for the PowerShell.
Many computer users run a modified version of the NT system every day, without realizing it. Through a peculiar turn of events, the version of NT which is widely used today is often called “GNU/Windows”, and many of its users are not aware that it is basically the NT system, developed by Microsoft's NT team.
There really is a GNU/Windows, and these people are using it, but it is just a part of the system they use. GNU/Windows is the userspace: programs that you run as the user. The kernel is an essential part of an operating system, but useless by itself; it can only function in the context of a complete operating system. GNU/Windows is normally used in combination with the NT kernel: the whole system is basically GNU with NT added, or GNU/Windows/NT. All the so-called "Windows" systems are really distributions of GNU/Windows/NT.
[1]: http://www.hanselman.com/blog/MakingABetterSomewhatPrettierB...
That said, I wouldn't trust any laptop hardware company out there enough to use the system that came pre-installed on it, but that's for another discussion.
I am sure there is a firestorm of opinions in the hundreds of comments below this textbox but frankly I just want to say that this was the best fucking news I have had all year.
Can you imagine the day you'd be able to link a library built in Linux with a Windows native app?
i miss absolutely zero windows software.
By doing this, Microsoft are legitimising bash as an OS feature for "developers who like unix". This has two effects on Apple:
1. It means they will be less likely to remove their bash CLI from a future OSX / iOS hybrid workstation OS nightmare that we all don't like to think about.
2. They will have to compete with Microsoft for customers who are "developers who like unix". Competition is good for us, the users.
With a real pty, you'd be able to use any Windows console program with mintty, sshd, Emacs term-mode, or whatever else you wanted, transparently. I regret not having a chance to finish that work.
So many tools require compiling/building in Cygwin and it'll be quite convenient to just be able to do that without the extra layer.
As for Ubuntu's libraries - Aptitude is there and you can apt-get quite a bit. I believe you could -- in some cases -- static link and run in "real" linux. They were pretty insistent that this is "real Ubuntu" -- basically they've done the reverse of Wine, mapping Linux calls to Win API equivalents, so that might open up the scenario you're thinking about.
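A sketch of that experiment — assuming a C compiler is available in the Ubuntu layer, and libc.a (package libc6-dev) for the static case. A fully static binary, with no dynamic loader or shared libraries, is the best candidate to copy over and run on "real" Linux unchanged:

```shell
# Build a hello-world; prefer a static link, fall back to a normal build
# just to keep the demo runnable where libc.a is missing.
cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { puts("hello, portable world"); return 0; }
EOF
if command -v cc >/dev/null 2>&1; then
  cc -static -o hello hello.c 2>/dev/null || cc -o hello hello.c
  ./hello
else
  echo "no C compiler here"
fi
```

If the `-static` build succeeded, `file hello` reports "statically linked" and the binary carries no runtime dependencies at all.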
After all, you can only auto-update Microsoft and Store apps. Other apps will either handle updates themselves, probably with an annoying UAC prompt and possibly at inconvenient times when you actually want to use them; some have processes that constantly sit in the background sucking up your resources to pop up annoying prompts to update application foo, during which you must watch for them changing your browser preferences and installing adware; for others you will simply have to go to their website and download an exe or msi.
Meanwhile you are missing the fact that people don't want to avoid the automatic updates that fix security holes. They want to avoid having the next undesirable update foisted on them before it's ready, much to their annoyance. Example: the Windows 8 UI change.
Unbelievably, staying on an older, still-supported platform until you are ready to update is a feature you have to pay money for!
Lest you misunderstand, I'm not talking about clinging to Windows XP until they claw it from your cold dead hands three years after end of life; I'm talking about the future equivalent of staying on Windows 7 because 8 sucks, then upgrading to Windows 10.
Personally I think someone interested in graphics/CAD/audio production might find something compelling, even if alternatives exist on Linux for the above. I don't see much in the way of GUI software that anyone would care for as a developer. You can bring up Visual Studio if you like, but I don't find it compelling.
https://sourceforge.net/projects/line/
("Line is not an emulator")
Hard to tell from the article, but I'd guess they just made an MS version of what this project does. The "mapping syscalls" sounds like implementing the ABI, similar to how Line did it.
[1] https://blogs.windows.com/buildingapps/2016/03/30/run-bash-o...
> Plus, Windows is still the leader in the desktop and laptop market until today.
They are facing some tough competition, but "not leader in given market" is not true.
Their position in the server world is not nearly so secure, which is quite possibly what this move is meant to address.
https://en.wikipedia.org/wiki/Embrace,_extend_and_extinguish
The thing is, from an "open source" perspective, what they're doing is great and totally legit. From a free software perspective, it has a lot of potential to be suspicious and troubling. If all you care about is "the best technology; yay" rather than user freedoms, your concerns are moot.
https://github.com/nodejs/node/blob/master/src/node_os.cc#L5...
Microsoft probably hard-codes a response of "Linux" (or whatever would be normal for Ubuntu) for that call to prevent Ubuntu binaries from freaking out.
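That check ultimately comes down to uname(2), which is easy to poke at from a shell. A minimal sketch; what WSL actually reports for these fields is an assumption on my part:

```shell
# node's os.type() boils down to uname(2); the same fields are visible
# from the shell. For Ubuntu binaries to behave normally, a translation
# layer would presumably report "Linux" here, not anything Windows-y.
uname -s    # kernel name (e.g. "Linux")
uname -r    # kernel release string
uname -m    # machine hardware name (e.g. "x86_64")
```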
There was also the Lenovo Solution Center problem. And the Lenovo BIOS shenanigans (or was it the Lenovo Service Engine?).
There have just been too many lapses in judgement at Lenovo. Either their competence is slipping or their ethics are. Either way, I'm done with them for now.
Maybe the real reason MS is moving super fast towards open source nowadays is to damage the open source goodies from within, after they failed to attack them head-on many years ago? I just don't trust MS, and it's hard to change that after so many years of hostile moves from their side.
I admit that gives me a bit of encouragement since I'm shackled to Windows at work, where I really try to make the most of it, but I'm by no means impressed given I already have done the following to get many native GNU/Linux binaries (as one example):
PS > Install-Package gow -Source Chocolatey
Another example: cygwin
The real limiting factor has been muxing terminals while on Windows. And judging by the post, that certainly remains a problem.
If they could successfully bring a full Bash terminal to Windows 10, I would actually be somewhat impressed.
Yeah, same feeling. Given all the lawsuits to monetize patents against Linux, I can't help but think this is some end run to get people to accidentally slip Windows APIs into open source, which would then end up in Linux, and then those lawsuits end up with more teeth.
2. GNU Parallel is actually my choice for this, because xargs has a few known issues. Parallel gets the most right, and I used it a lot in my research to run scripts on lots of data.
3. Hell no. Distributions do this job perfectly fine, and you should be able to read the man pages without having read permissions for the binary (since in UNIX you can have 111 as permissions on a binary).
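The execute-only case mentioned in point 3 is easy to demonstrate; this is just a scratch sketch with throwaway filenames:

```shell
# Mode 111 (--x--x--x): the binary can be executed but not read or
# copied, while its man page lives elsewhere and stays world-readable.
cp /bin/true ./tool
chmod 111 ./tool
stat -c '%a' ./tool    # prints: 111
./tool                 # still executes fine
# Caveat: this only works for compiled binaries. A shell script with
# mode 111 fails, because the interpreter has to *read* the script.
```

(Also note that root bypasses the read restriction entirely.)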
Is there any real benefit to me switching over to this approach aside from what I'm guessing is better performance since it is native and not a VM?
> If you don't give permission the action is not taken. Granted I am currently getting spammed to update my home computer from win 7 to 10 but it hasn't force installed on me. Likewise for automatic updates.
Except for the actions that the operating system doesn't tell you about, and that you can't be sure about because it's proprietary (Windows has at least 3 backdoors and spy features that we know of, and none of them ask for permission). And all of the DRM and related malicious functionality that stops you from doing things you'd obviously want to do with your computer.
It amazes me that people don't understand that one's usage patterns might be different, so "just use Ubuntu" wouldn't be applicable (e.g. a UNIX-savvy guy who appreciates the shells and userland, but nevertheless wants to do Windows .NET development, or work with native and proprietary Windows programs the rest of the time).
For my laptops... I've always had Linux laptop problems. Sometimes I've gotten really close to everything working... but then maybe I'll find out my battery drains in 2 hours because some power-saving feature is broken in my kernel, and I can't be arsed to go fix it; I'd rather just buy a MacBook.
Now for this vs. a MacBook... it depends how the next MacBook line looks. I need a 32GB RAM laptop. If Apple misses that boat again, this will start to look pretty dang appealing.
No, it actually won't. There are so many tricks and caveats with shell expansions, wildcard handling, etc., that old unix hands I know regularly get them wrong (and I started on SunOS, probably before half of HN was born).
This is like saying "pointers are nothing, you can learn them in a day", ignoring the obvious fact that the interplay of pointers in a large app is something entirely different from merely understanding indirection, and that even the best kernel/driver/crypto/etc programmers still get pointer-related bugs after decades of writing C.
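One concrete instance of those wildcard caveats, sketched in a throwaway directory (don't run this anywhere you care about):

```shell
# A file literally named "-rf" turns an unquoted glob into rm flags:
cd "$(mktemp -d)"
touch ./-rf important.txt
rm *             # the shell expands this to: rm -rf important.txt
ls -A            # only "-rf" is left; important.txt is gone
rm -- ./-rf      # "--" ends option parsing; "./" anchors the name
```

Quoting variables, using `--`, and anchoring globs with `./` are the usual defenses, and even experienced hands forget them.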
>3. Hell no. Distributions do this job perfectly fine, and you should be able to read the man pages without having read permissions for the binary (since in UNIX you can have 111 as permissions on a binary).
Actually, no, they don't do it at all. One can have 3-4 different versions of a userland program installed and never know (unless they explicitly check versions) which one the man page is for.
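A quick way to at least see which binary and which man page you are actually getting when several versions coexist:

```shell
type -a sed               # every sed on $PATH, in lookup order
command -v sed            # the one the shell will actually run
man -w sed 2>/dev/null    # which man-page file `man sed` would open
sed --version | head -n1  # ask the binary itself (GNU sed only;
                          # BSD sed has no --version flag)
```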
Never made sense to me. It was driven by IT because of control issues. I introduced Linux (this was 10 years ago), and then slowly every dev switched to developing on Linux because it was a better development experience. Unix is by hackers, for hackers. IT was forced to incorporate these systems, which wasn't hard.
Now with this change I can see why people might switch back, definitely makes it easier to have Windows IT shop, but still be able to target Linux. Personally, Docker has already started resolving this issue for me, but I can see it helping Windows devotees. MS lost my trust back in 1996, and I honestly don't know what they could do to regain it, but this isn't enough for me.
This seems like a prime example of the "Anything I am good at using is objectively easy to use" fallacy that's common to programmers.
SUA is based on an old version of BSD, not GNU. tcsh, csh and sh. The compiler works. There is an old version of lex. For some of the userland, Windows binaries are provided, such as vi. It's better than nothing.
Why did MS remove SUA from Windows 10? What harm would it do to remain an optional add-in as it was in Windows 7?
Why do users have to upgrade to Windows 10 to use Linux binaries? Seems like Microsoft will do _anything_ to get users to upgrade. What are the privacy implications of Windows 10? Microsoft is very untrustworthy.
Will users be able to run their own Linux binaries on Windows?
Windows has never been a "pleasant experience". It's the unpleasantness of it that makes the alternative, UNIX, so appealing.
Apple is just as proprietary, commercial and anti-competitive as Microsoft here.
FWIW, this excites me because it potentially means I can go from two machines to one, and always have IE/Edge at my fingertips. It will greatly improve my dev workflow if it pans out like people are hoping it does.
I'm someone who doggedly persisted trying to dev on my windows box because the stability, speed, app support, GUI niceness of windows is just far superior to Ubuntu (I won't speak to OS X since I've only done minimal dev on it). I won't go into a lengthy defense of this claim - but will if pressed.
I put up with all the failed Python module installations - the hunting around for the right Visual Studio compiler... the 64-bit Python install issues... on and on... I put up with it all... only to be defeated in the end by various node modules failing to install because they use ridiculous depth in their directory structure that the Windows filesystem can't handle. Our project needed those dependencies. Something had to give.
So I tried a vagrant VM with VirtualBox - and shared folders... so I could keep my Windows GUIs without needing to ssh everything to the VM. Somehow - even though the shared-folders thing means the VM is ultimately using the Windows filesystem - the node modules would install okay. But then I had problems with symlinks (which was solvable with effort)... But the worst thing was that various files, and sometimes whole directories, would randomly have their permissions changed inexplicably, such that NO ONE - not even an admin user - could touch them. The VM would get locked out, I would get locked out... it was horrid. It happened in the middle of a rebase once. Sad times... Sad... sad times.
So - I ditched vagrant and shared folders and use a totally contained VM with the ubuntu GUI... it's slow and horrid and it makes me cry... but at least I can alt-tab and waste time in a browser in the windows GUI if I want to.
So anyhoo - my concern. This approach by MS is going to mean everything plays with the same Windows file structure, yeah? Or does the Ubuntu thing get its own self-contained filey-bits to play with?
Cause if the former... then I will have the fear... THE FEAR... when I try to use it.
Well, the Task Manager for example: the header is rendered per-pixel, and the rest of the window is a blurry mess. The File Explorer window scales with the DPI setting, but for some reason the fonts still render at 4K pixel size even at 175% scaling, making it very hard to read. Chrome looks like it's rendering at a much lower resolution and then blowing back up. Resizing a busy window chugs my GTX 980, yet the Haswell integrated GPU on my MBP 13" handles 5120x2800 pretty well on OSX.
Why can't they just supersample the user-selected resolution then shrink back down to 4k like OSX? Everyone already has retina assets for their app.
You are making it sound like they are forcing, or even automatically upgrading Windows 7 to Windows 8, or Windows 8 to Windows 10. They aren't. You have to specifically choose the 8->10 update, even if you are getting updates automatically installed.
> I'm talking about the future equivalent of staying with windows 7 and upgrading to windows 10 because 8 sucks.
Which you can do. I'm not sure what exactly your complaint is here. What am I missing?
(Note: I found Windows 8 to be superior to Windows 7 in every way except the start menu. I find Windows 10 superior to Windows 8 in every way except for Privacy :/ )
Without thinking much I can come up with a list of instances where Apple did exactly that - followed where they think the market is. Examples include making phones with bigger screens [0], using a stylus with tablets [1], smart watches, small-screen tablets [2], and the list goes on and on.
[0]: http://www.engadget.com/2010/07/16/jobs-no-ones-going-to-buy...
[1]: http://www.engadget.com/2010/04/08/jobs-if-you-see-a-stylus-...
[2]: http://fortune.com/2012/04/17/what-steve-jobs-said-about-an-...
> Their position in the server world is not nearly so secure, which is quite possibly what this move is meant to address.
Microsoft never had the lead in server operating systems, so they're not doing this just because they are no longer the leader. This is proof that Microsoft is simply changing how they handle competition and FOSS under Nadella.
It'll get better through rewrites, new tools, different infrastructures etc long before it gets better through iteration on the same tools. That's OK though :)
[0] I do know that they are both parts of the VS build tooling. ;)
Maybe the other way would be easier, use the VM for all dev file storage as well, and export a SMB share that you can connect to from windows. Same sharing capability (as long as the VM is running), but you don't have to worry about different underlying file system semantics.
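A minimal sketch of that setup, assuming a Debian/Ubuntu VM running Samba; the share name, path, and user are placeholders:

```shell
# Keep all project files on the VM's native filesystem and export them
# over SMB, so Windows mounts the share instead of the VM mounting a
# VirtualBox shared folder (avoiding its permission/symlink quirks).
sudo apt-get install -y samba
sudo tee -a /etc/samba/smb.conf <<'EOF'
[projects]
   path = /home/dev/projects
   read only = no
   valid users = dev
EOF
sudo smbpasswd -a dev        # set an SMB password for user "dev"
sudo service smbd restart
# On the Windows side:  net use Z: \\<vm-ip>\projects /user:dev
```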
> So - I ditched vagrant and shared folders and use a totally contained VM with the ubuntu GUI... it's slow and horrid and it makes me cry... but at least I can alt-tab and waste time in a browser in the windows GUI if I want to.
Personally, I would just SSH for access to the VM though, as I find PuTTY superior to having a desktop as a window on a desktop (I would prefer to RDP to a local Windows VM as well). But I use Vim as my IDE, so it's extremely easy for me to do so.
That said, Visual Studio announced support for targeting Linux today (I assume either through SSH to a local VM or remote box, and/or the local Linux support they announced here), so that might be an acceptable route in the future.
You could certainly say that none of that matters and what we should care about is the actual actions that each company makes, but I can definitely see how people would read more intent into things like this when Microsoft does it compared to other companies.
You chose literally two of the worst examples out there to try and prove your point.
[1] https://github.com/PowerShell/Win32-OpenSSH/wiki/Win32-OpenS...
Freedom, a full GNU userland, proper package management of the entire system, plethora of CLI programs which can fulfil your every need and only really work on GNU/Linux, configurable, etc.
> Every year I try a switch to Linux desktop. This year I made it as far as trying to get multiple monitors working well. I also dabbed in gaming. In the end I went back to my work=Mac game=Windows duopoly.
I use multiple monitors every day for my work under OpenSUSE and Arch. They both worked with either minimal (Arch) or no (SUSE) configuration. I use DisplayPort which works pretty well.
I set up the brand new Windows 10 Pro Insider Preview 14295 from MSDN. Hanselman's blog post says:
> After turning on Developer Mode in Windows Settings and adding the Feature, run you bash and are prompted to get Ubuntu on Windows from Canonical via the Windows Store...
OK, I turned on Developer Mode. Now what? What does "[add] the Feature" mean?
http://www.hanselman.com/blog/DevelopersCanRunBashShellAndUs...
right now my initial deploy+vm provisioning scripts are written in windows .cmd scripts, only because it means I don't have to have a C&C server somewhere. I'll be really happy to put all my automation scripts in bash and have it run regardless of dev env!
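As a purely hypothetical sketch, a .cmd deploy wrapper might shrink to something like this once bash is the common denominator (host names and the provisioning step are placeholders):

```shell
#!/usr/bin/env bash
# The same script runs on the dev box, in CI, or on the target -- no
# separate C&C server and no Windows-only .cmd dialect needed.
set -euo pipefail

hosts=(web1 web2)                      # placeholder inventory
for h in "${hosts[@]}"; do
  echo "provisioning $h"
  # ssh "$h" 'sudo apt-get update -y'  # real provisioning would go here
done
```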
[0] https://en.wikipedia.org/wiki/Windows_Preinstallation_Enviro...
Homebrew also breaks with every major OS X release. It would be really nice to get some 1st party support on a package manager.
I feel like Apple does the bare minimum in this area to stay POSIX compliant. I would really like to see them feel some pressure here.
[0] https://www.reddit.com/r/bash/comments/393oqv/why_is_the_ver...
I code ruby running in this system (some stuff has issues, it's beta, and I'm doing Sinatra) but using Visual Studio Code as my editor.
How do these Ubuntu tools interact with the rest of the Windows system they are running on?
Can I 'kill -9 explorer.exe'?
Can I touch a file in /mnt/c/(whatever)/Desktop ... and have that file actually show up on my Windows desktop?
I have successfully installed KDE on this (via Cygwin plus a mesa/llvmpipe build of opengl32.dll). It seemed to work OK.
Err. As a Mac user from when system 7 was fancy looking, and who learned some basic bash in order to do useful stuff like use curl, grep, cat, ls > .txt, rm's based on partial name matches, etc. my only response is "How about we talk about it over breakfast, lunch, and dinner?"
Bash is simultaneously graceful and nimble, yet clumsy. While it's certainly appropriate to worry that any reduction of the clumsy side could have a net negative effect, not seeing, or not acknowledging, the many issues just continues to deny its utility to non-expert users.
All the yakka spent on avoiding the obvious solutions to justify "usage patterns" should signal to any half-aware dev that those precious patterns are broken.
I'm surprised to hear someone preferring MacBooks' cooling, though; my MacBook always gets so hot relative to other laptops.
Just for fun, here's a version of your quote from another perspective:
As a Linux user, FreeBSD does nothing to attract me. I dislike it because it feels like it's stuck in the 70's, with a mess of shell scripts for system and service management. Now with the SCO Unix compatibility layer they added something absolutely prehistoric to what already felt old. Count me out.
Also, how did you post this, is there a browser for FreeBSD now? ;-)
Not long ago Microsoft schemed to stomp out Linux and now they've had a change of heart? Fuck Microsoft! To this day even they engage in anti-competitive bundling with OEMs, not to mention their seedy history in relation to open source.
Android certainly doesn't have the userland that is typically (incorrectly) called Linux.
Also, why gnu and not bsd? OS X's find, grep, sed, and so on work fine for me and are not gnu.
"The Portable Operating System Interface (POSIX) is a family of standards specified by the IEEE Computer Society for maintaining compatibility between operating systems." (https://en.wikipedia.org/wiki/POSIX)
I think the command that you meant to say was killall :) `kill` will only kill pids, not process names.
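A small sketch of the pid-vs-name distinction; the taskkill line is the standard Windows-side tool, not something confirmed to work from inside the Linux environment:

```shell
sleep 300 &              # stand-in target process
pid=$!
kill -9 "$pid"           # kill(1) takes pids
wait "$pid" 2>/dev/null || true  # reap it; status >128 = died by signal
# Name-based equivalents from procps: killall sleep / pkill -x sleep
# explorer.exe is outside the Linux process table entirely; on the
# Windows side the analogous command is: taskkill /IM explorer.exe /F
```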
I don't use a particularly fancy WM/DE under Linux but it's the little things such as middle click paste, and window operations such as maximize horizontal/vertical (by using the middle and right mouse buttons) that I miss.
Is it really that bad? At one former job I ran Ubuntu under VirtualBox with guest additions installed (new Linux team in an old MS shop).
Performance was OK, at least for the things I used - terminals, vim, Firefox. The only thing that really annoyed me in this setup was the need to switch between the VM and Outlook every now and then. Fortunately, Outlook's notifications worked even in VM running fullscreen (IIRC).
https://juliankay.com/development/setting-up-vim-to-work-wit...
I honestly have never found a problem with the Finder and miss a lot of its features (column view, drag file to file dialog, high-resolution previews for most file types, Quick View, and much more) when I'm in Windows or Linux.
RDP, Robocopy, Find.exe, Sort-Object, Select-Object, Select-String, Invoke-WebRequest, Invoke-RESTMethod, IIS, .NET, CMD, ASP.NET, and Compare-Object will fill most of those needs.
And Powershell accepts "cat" if "gc" for "Get-Content" is too long. Or "ls" if "dir" or "gci" for "Get-ChildItem" is too long. And each of these is a case-insensitive object you can pipe right into ConvertTo-Html or Send-MailMessage.
Or you can create a UDP socket with .NET right from Powershell, and send your objects that way.
Well, actually my "test machine" is a Windows 10 14295 VM running in Parallels. So if this ends up working (no reason it shouldn't) I will be running Ubuntu on Windows on OSX! :-)
Btw, FreeBSD has run Linux binaries for years, which is also irrelevant.
There's a stereotype that the open source people are practical but don't care about political issues and the free software people hate everything proprietary with a passion, but of course that's not always the case. Big companies like Microsoft aren't monocultures. They have some really amazing people, even if not everyone is perfectly enlightened. The path to more user freedom is allowing those good people to continue to push technology in the right direction. This is a step toward more freedom.
Many of us don't just use one computer. I use every OS on different desktops, laptops, tablets, phones, servers, and consoles. They might not all be equally free, but I only need one to be fully free to know that I have freedom that can't be taken away. Even for those who don't have a fully free system, the most important thing in my opinion is that the option always exists. If they aren't served by proprietary software, they have somewhere to go.
GNU won against all odds. It's here to stay, proliferating across so many devices. I'm happy to welcome people who might not have ventured outside Windows into the family!
You are right. My English is not very good and I did not read carefully what OP wrote about this. Maybe it was the silliness of the word "slave" being used like this that threw me off ;)
Those third-party app updates can be annoying, but I honestly cannot complain about all those problems you talk about, like processes sucking up resources, popups, and adware automatically installed with updates. I do not even see this happening with (very) non-tech people around me. So I think it is a very suspicious argument. Even worse would be to suggest that those problems are Microsoft's fault. Maybe you get that adware precisely because you think UAC is annoying. Can we blame Google when a user gets a virus-ridden app from a place other than the official store and ignores the OS warnings?
But if this is the reality, it is another good argument for the push for windows 10 update and the adoption of UWP. In fact, I think microsoft should push even harder for windows 10 updates, it is the right move.
Also, I think the idea of keeping a Windows machine updated with only the parts the user wants is hilarious. And I do not know who those people you talk about are. I love to test OS previews, and I have never heard a person who didn't already dislike Microsoft for whatever reason make a big deal about a UI update (Windows 8 "metro" mode was shit but easily ignored; the Windows 10 UI is better, and amazing).
The more updates and innovation, the better. I am not afraid :)
Remember OS/2 2.x? It could run Windows 3.x binaries, including GUI programs. The result was that no one wrote programs for OS/2. Windows programs would run both on Windows and on OS/2, so why write another one for OS/2?
Why should anyone port Linux programs to Windows now? Just write for Linux and it will work both on Windows and on Linux. So now you actually have more reason to target Linux.
I remember this clearly as if it was yesterday, because I tried and failed to build a Messenger-compatible client. They defended exclusive access to the API fiercely.
Their attitude seems so petty now in retrospect.
I think your argument actually supports parent's position, so this "plus" in the next sentence seems out of place :)
There, now we're even. Let's get back to EEE, and the real problems and advantages of this news.
A team of sharp developers at Microsoft has been hard at work adapting some Microsoft research technology to basically perform real-time translation of Linux syscalls into Windows OS syscalls. Linux geeks can think of it as sort of the inverse of "wine" -- Ubuntu binaries running natively in Windows. Microsoft calls it their "Windows Subsystem for Linux".
Sounds a lot like Rosetta [0], which Apple used to transition people from PowerPC to Intel.
[0]: https://en.wikipedia.org/wiki/Rosetta_(software)
I can confirm that from personal memory.
At the time, the biggest vendor of UNIX-like systems was Microsoft (Xenix), so compatibility was a benefit. They soon sold that product off, though.
Apple ran a fairly successful campaign in a number of highly technical publications shortly after OS X came out, pushing the concept that OS X was not only Real UNIX(tm) but also that a Mac was the best Unix workstation you could buy. I'm guessing that's the one; it certainly worked on me.
Here's one of the ads: http://www.brainmapping.org/MarkCohen/UNIXad.pdf
I think there were a few others. I seem to recall one showing OS X running MATLAB.
Not that I am against WINE. I think it allows me to just ditch Windows entirely.
I run StarCraft II on WINE 1.9 at a higher framerate than what Windows provides. That was probably the only thing I would have used Windows for.
https://en.wikipedia.org/wiki/Embrace,_extend_and_extinguish
This is well known and I think it would be really hard to pull off the same trick again.
Isn't it far more likely that this is simply another step in their recent opening up of their platforms to developers?
You are mixing them all and that's how the debate gets stuck into some neckbeard-limbo that nobody cares about.
Society made a lot of progress when religion and state got decoupled from each other. There are some things that should be handled separately.
What I have to say about this is:
Technology-wise, GNU/Linux software is separate from that of Windows at the binary level as well as dependencies. For them to extend such software means that they would need to build on that. That would extend the GNU/Linux ecosystem.
Legally speaking, open source software is protected by open source licensing, which requires derived software to also be licensed as open source. It is hard to achieve the "extend" part of the "embrace/extend/extinguish" loop while open source licenses are in place.
In terms of values, they're a for-profit corporation trying to reach out to developers. Same as every other company. They have open sourced .NET, they've released some of their actually important software on Linux (SQL Server), they have embraced the Linux platform on their cloud environments... everything possible to appeal to developers. It doesn't appeal to me, though.
[1] http://www.cnet.com/news/sun-microsoft-settle-java-suit/
That said, for most development work I prefer Linux, and run a Linux home server as well as various VMs, so this announcement sounds great to me.
And as a final aside: basically all of the information gathering that people complain about with Windows can very easily be turned off. In fact, for the most part the installation wizard actually leads you through how to turn it off. For me that makes it a non-issue, although I understand that some feel differently.
I think this is the third time I'm writing this reply on HN. Seems to be a common misconception.
Seems to work only with WriteConsoleOutputW(), while C I/O behaves unpredictably.
But if you want to talk about userland, then you need to buy supported equipment. OSX doesn't work with a ton of equipment out there - as that equipment does not come with OSX drivers. Windows also works like shit when it doesn't have the drivers - anyone who has had to install XP regularly will quite happily attest to just how terrible it is at supporting network cards before you install drivers.
Then, of course, there's the bonus of the AMD Catalyst driver installer program for windows: at least as recently as win7, if you didn't have drivers for video, it fell back to VGA graphics. The Catalyst driver installer was too large to be seen on VGA - you couldn't see the bottom of the installer window to see what was going on, and couldn't drag the window high enough without a hard-to-discover key chord. :)
The argument to "buy stuff your operating system supports" sounds like a cop-out, but it really isn't. OSX, for example, is difficult to make run on things other than Apple-designed computers, but if you complained about it, people would write you off as an idiot.
Some people have compared Mac OS X and Windows because of their proprietary nature, but one key difference is that you don't run Mac OS X as a server operating system or on the cloud. While many people develop on Mac OS X, they might still build and deploy to a Linux server.
For Windows to support the GNU/Linux userland is to finally empower Windows as a competitive platform for the cloud. Windows has suddenly become a viable deployment target for a wide spectrum of software that prior to this could not target Windows.
I am not looking forward for a lower market share of Linux (or BSD, Solaris derivatives, to be fair) on servers or on the cloud.
I respectfully disagree. The OSX kernel XNU is open source, as are a ton of its components. That's huge in a lot of situations. Some things - not a lot, granted, but some, like FreeBSD's C++ stack and compiler - are even upstreamed back to mainstream open source projects by Apple employees.
Sun argued that Microsoft was intentionally breaking compatibility, but the other side was that Microsoft was actually exposing more developers to the fledgling language and providing a GUI that felt native to the rest of the OS. When Swing finally came out, it felt like you were running under CDE. That made me avoid running or writing Java applications for years.
In a lot of ways, Android repeated the exact same thing. Dalvik applications won't run in the Oracle JVM.
Apple + Microsoft "collaborating" on Macintosh software = Windows
IBM + Microsoft "collaborating" on OS/2 = the NT kernel, Windows NT
Sybase + Microsoft "collaborating" on Sybase SQL server = MS SQL Server
Sun + Microsoft "embracing" Java = .NET Framework
It's a lose-lose situation in cases like this.
Like with their SMB support? When, given the choice of complying with the GPL or re-writing an SMB client from scratch, Apple chose the latter and subjected users to utterly broken SMB support for several releases just so they did not have to open source their pitiful collection of patches?
Are they? When did Microsoft's EEE strategy benefit the user and lead to the best technology? IE6? JScript? ActiveX? J/Direct? MSN Messenger?
They have identified the core of their business and they feel comfortable there. They get a 30% royalty for each App Store transaction, they make large profits selling iPhones and Macs. They monetize their software indirectly as a part of a larger end-to-end solution.
Microsoft during their monopolistic era was much more than that; they were going for everything.
http://www.apple.com/osx/server/
It's actually a pretty decent server environment.
And when I'm stuck on Windows for whatever reason, I'll be immensely grateful to be able to do that with a bash shell and all my normal configuration and tools.
Sure, that's what this whole thread is about. But I'm afraid that there is causality involved that we know about without collecting any sample data at all.
Developers of Linux device drivers for consumer devices quite often do not have access to all the hardware and information they need. And hardware makers quite often do not make high quality Linux device drivers for their consumer devices. For most consumer laptops, the vendor will not do integration testing to make sure everything works well together on Linux.
I don't think the right response to that is to deny that the average laptop or peripheral will work better with Windows. The right response is to make it clear that you have to make very deliberate hardware choices if you plan to use Linux and accept that much of the hardware you can use is always going to be slightly dated. And that is in fact what many Linux advocates are saying.
Choosing a Mac also restricts your hardware choices quite dramatically after all, so if a broad selection of hardware is your goal, Windows is the only game in town.
Yeah I'm sure all those celebs that had all their nude photos stolen from icloud would totally agree that Apple takes great care with personal data.
Are you trying to justify needlessly spending $600 on a phone every year? That's your prerogative; you don't have to convince strangers on the internet.
Apple took KHTML and when it started to fail, forked it into WebKit. Google took WebKit and when it started to fail, forked it into Blink.
When IE6 started to fail, the whole industry suffered. (And is now suffering again as Apple refuses to allow any other rendering engine but their failing WebKit port on iOS.)
In my context as a software engineer, or as a private individual, I didn't find your argument convincing, in the sense that it doesn't expose any specific problems that concern me. If I were responsible for confidential data, my views might align more with yours.
"As long as Microsoft continues to disrespect the rights of users in regard to privacy, data-collection, data-sharing with unnamed sources..."
I'm already giving up most of my privacy in the general context when using an off-the-shelf cellphone (in the sense that I have no idea how much data leak my daily cell phone use causes and I'm fine with it since I have no time to implement a personal data safety plan).
If I wanted privacy I would stop using technology altogether. I just want things that a) work with b) minimal financial risk.
"when most of what makes Linux special (respect for the user) has been stripped away."
Sorry, for me it's the "Linux the technology stack", that make it special, not the "Linux the philosophy".
"please don't sell your souls and the future of software technology for ease of use"
For me personally, 'ease of use' is the single most important optimization parameter when choosing technology. Although, in my definition, this encompasses things not only directly related to daily use, but includes license cost, security and data loss prevention. What I care about most is retaining copyright to my own data.
Why 'ease of use'? Because the only thing that truly constrains me in this world is time. As in, how long I have to live, and how much effort I need to reach my goals. Put in this context, I don't care about the philosophical implications of the way I solve problems and implement things - I just want them done.
I.e. if the product does what I want, I don't really care about the philosophy. Perhaps I have too much faith in market forces and laws prohibiting monopoly, but I fail to see much personal benefit in an all-encompassing "platform philosophy", since to me, technology platforms should focus on solving real-life problems.
"please don't sell your souls"
I don't pour my soul into the technology platform I use. I pour my soul into the design itself - the implementation on the platform is just a realization of a design that could very well live in an abstract Turing machine. Although the implementation running on live hardware is much, much more fun (and exposes the bugs).
You take risks and bets in life and business. Use the tools that give you the greatest flexibility and build a robust business. In this case, these announcements increase that flexibility.
While everyone here keeps complaining and debating Embrace/Extend/Extinguish, someone is embracing the changes and out-executing them. I know which I'd rather be.
All systems have had and still have security flaws. There is a world of difference between having an unknown vulnerability (which affects and will affect all platforms) and intentionally spying on users and stealing their information.
More specifically, it's the state of Linux device drivers and integration testing for laptops and consumer peripherals I'm complaining about. That's not a result of any inherent deficiency of the Linux kernel or its design. It's ultimately an economics issue.
This is true for a certain type of users (developers deploying on Linux).
For a different class of users (desktop/laptop users, or developers developing on Linux), Linux has a documented history of "Fuck you very much".
Microsoft has recognized that there is a large overlap between these two classes.
And Microsoft manages to undermine Red Hat at the same time. Win-win for them; why the Linux community should be excited about it, I don't know.
And also the requirement of specialist tools to do non-specialist jobs like set the engine timing, and then charging a lot for purchase of the tool. It will be interesting to see how serviceable electric cars become once they are old enough to start being parted out and the used market expands to those who 'just need something to go from A to B'.
Yes, absolutely. Bill Gates was not a "nice" man. These days many people fawn over him: "oh, he's so lovey-dovey, he's going to save the world with all his money as a philanthropist". LOL - how many have actually sat at the same table (behind closed doors) with Billy boy pounding it, telling everyone attending (ISPs, major telecoms) how Microsoft was going to run the show, run the world? Microsoft is a ruthless company. Satya Nadella was part of this ruthless culture when he first joined MSFT 25 years ago. Why would that culture suddenly change just because they've figured out how to peg Ubuntu Linux to the kernel? Ever hear of a Trojan Horse? This announcement today sure smells like horse manure.
It isn't just about the philosophy either. Not to me.
Linux offers a stellar scheduler, phenomenal file systems such as ext4 and XFS (soon ZFS!), cgroups… the list goes on.
RMS was foresighted enough to make licensing a core part of open source. I have a deep respect for the man.
The Achilles heel of open source software remains patents. In that regard I think many proprietary players still have the upper hand.
That's a good point. It's all about having real options (freedom to move) and minimal switching costs. That said, I'm still concerned about a possible Trojan Horse scenario here whereby Linux on Windows is the hook to try and get people into the proprietary Windows dev tools (Visual Studio etc.) and checked into the Azure "roach motel" cloud (easy to check into, hard to check out).
> Big companies like Microsoft aren't monocultures. They have some really amazing people, even if not everyone is perfectly enlightened.
Microsoft most definitely has some amazing and talented people, but I disagree with you about culture: the culture of any company is undoubtedly set from the top down (by the founders or directors). Please do not be so naive as to think Satya Nadella does not set the (hierarchical) culture at Microsoft. This isn't to say there may not be some fiefdoms within a company as large as Microsoft, but there is an overarching culture and it comes from the top.
> Many of us don't just use one computer.
This is probably true for some, but some people might only be able to afford one computer. One scenario I can see which might be appealing to a developer, as of this announcement yesterday, is using a MacBook with Apple's Boot Camp to partition the internal drive such that one could have as many options as possible (OS X on one partition, and Windows 10 with Linux on the other).
> GNU won against all odds. It's here to stay, proliferating across so many devices. I'm happy to welcome people who might not have ventured outside Windows into the family!
It would be really cool to hear what RMS (Richard Stallman) thinks about this. I wonder if he'd be up for an AMA on Reddit to address this seemingly earth-shaking announcement by MSFT?
And also because the user experience of OS X (and the "just works" aspects of the OS in general) is far above that of any Linux desktop.
That's not true. They're automatically upgrading computers. Read some of the thousands of comments below to hear the stories. My Windows 8.1 laptop automatically scheduled itself to upgrade, and I was fortunate enough to be paying close attention and able to cancel it.
https://www.reddit.com/r/technology/comments/4a0asv/warning_...
https://www.reddit.com/r/pcgaming/comments/4a5edx/psa_window...
But if we really need to be juvenile and discuss everything to discuss anything, Apple is a greedy, voracious company. They are never presented otherwise, and cynicism meets all of their activities. But it's open and honest, and Apple doesn't try to be who they aren't, and users don't treat their actions as selfless gifts to the world. We all expect almost everything Google does to somehow pull in more ad data, to pull people to the fold, etc.
It's only with Microsoft where this naive "oooh, whole new company. So good" nonsense appears, and it grows incredibly tiring and seems more like a bad astroturfing campaign.
I think he would say what he has always said about secret software. He's fairly consistent in that regard.
There are lots of layers within the Windows kernel. They provide a lot of functionality, and from many perspectives are superior to unix. For example, it would be far easier to write a massive, robust, sensible init system on top of Executive Services than on top of unix. But I can't see how they get that extra functionality without introducing larger overhead for things like forking. And certainly in my previous experience of NT, forking has been incredibly slow.
Everything-is-a-file is much talked about with regard to unix. But fast forking is far more significant. Apache happened because of fast forking. Shell pipes assume fast forking. The way you write shell scripts assumes fast forking. One of the reasons cygwin has never felt right to me is that its forking is sluggish. I don't think that's cygwin's fault; I think it comes from the design of Cutler's kernel.
The hybrid they're offering here is probably the sweet spot - keeping the strengths of Windows, but getting access to a full unix layer.
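The fork-per-stage pattern this comment alludes to is easy to make concrete. A minimal sketch (assuming a POSIX host; the "pipeline stage" here is a stand-in, and any timings are illustrative, not a benchmark):

```python
import os
import time

# The fork-per-stage model that shells rely on: each stage of "a | b | c"
# becomes a fork()ed child. On Linux, fork() is copy-on-write and cheap
# enough to do constantly; spawning a comparable NT process has
# historically been far heavier.
def run_stage(n):
    pid = os.fork()
    if pid == 0:
        # Child: stand-in for one pipeline stage; report via exit status.
        os._exit(n)
    _, status = os.waitpid(pid, 0)
    return os.waitstatus_to_exitcode(status)

start = time.perf_counter()
codes = [run_stage(n) for n in range(1, 5)]
elapsed = time.perf_counter() - start

print(codes)                                  # [1, 2, 3, 4]
print(f"4 forks in {elapsed * 1000:.2f} ms")
```

When a process spawn instead costs tens of milliseconds, every shell script, pipe, and `configure` run pays that tax thousands of times over - which is the sluggishness the comment describes feeling under cygwin.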
Based on what, exactly? That they opened part of .NET, except that it's only the web stack, and not the part everyone wants (WinForms)? That they released a reskinned version of the Atom editor? That they announced the release of a Linux version of SQL Server, except that it will be a simplified version, absent of the enterprise features? That they submitted C# to ECMA, as though this allowed anyone to port a realistic application to another platform, or that the world has any use for a closed-source language and compiler today? That they allow you to run Linux VM's in Azure, as though Azure could be competitive if they didn't?
Now this? I mean, sure, there are times I'm working in Visual Studio and it would be convenient to use shell commands like "cut" and "sort" without having to resort to Excel, but the implication of this announcement is that I'm going to do serious work with GNU tools under Windows? Like, I'm going to do Linux-type development work while being hamstrung by a reboot every couple of days for the next someone-can-take-over-your-computer-by-looking-at-it-cross-eyed patch?
Maybe you haven't been in this business for 23 years, and haven't seen how many products Microsoft bought and spiked to make sure to keep their stranglehold on the ecosystem. (I'm still bitter about Groove.) Now Microsoft is on the precipice of being as irrelevant as the IBM they mocked 20 years ago, and these moves are only an attempt to extend their relevancy a little longer, but don't actually mean anything.
You say Microsoft is different. If, by that, you mean that they're making a lot of moves that seem like desperate attempts to make people remember they exist in the post-PC era, then yes, I agree. Until Microsoft releases Office and Exchange for Linux, they will never be seen as anything other than Gates'/Ballmer's Microsoft in my eyes. Office suites are hardly important any more, and lots of companies are just using Google Apps instead of Exchange and AD, but that's the kind of move they'd have to make for me to take their "Microsoft Loves Linux" campaign seriously.
It's absolutely not a non-issue when I was helping my mother set up her new Win10 laptop.
She doesn't want to be tracked, I don't want her to be tracked.
But currently she is being tracked by Microsoft very much.
As are millions of other people that REALLY do not want to, if they had the choice.
There was no wizard when I got there to help her. Obviously she had set up most of the laptop herself because she's not intimidated by computers and really pretty good with them (at age 66). Except she couldn't get email to work.
And neither could I, btw. She wanted two accounts in the Windows 10 default mail client: one from the ISP and a very old Hotmail account. Somehow this just wouldn't work, and the stupid mail client was actively hiding the information I (or she) needed to troubleshoot the problem - not just a bit; I got essentially zero information, plus a numerical error code. It's beyond me why MS would want to translate standard IMAP and SMTP error messages into something even more cryptic. I ended up installing Thunderbird, which works.
But I digress. My mom is not a HN-reader, so she hadn't quite heard about Win10's totalitarian surveillance features, and when first setting up her laptop she wasn't quite sitting there in the adversarial position of "I'd rather brick my system than let it spy on me" that any of us would need to assume in order to later claim "the information gathering that people complain about with Windows can very easily be turned off". She doesn't want to be tracked, so I'm sure she ticked off a couple of the checkboxes that were clear about it.
But maybe not the ones that popped up warnings about your system. Or the ones misleadingly worded as if they were something beneficial ("just like the US government does" is not really a bar for good intentions), which you need to Google to figure out what they really mean. Or the tiny checkboxes that only explain their maliciousness in a very tiny font next to the big friendly letters that just want to steer you through the wizard. (I didn't see any wizard, but this is the deliberately misleading design I remember from Win8, obviously intended to grab as much private info from users as possible while pretending to give them a choice.)
During the afternoon I helped her, I only had time to set up the email and do a few other things - on top of having to wait through 45 minutes of some giant update, which upset her a bit: it was a brand-new, high-end laptop, so why did we have to wait for things to happen? Damn right. The longer I'm on Linux, the more ridiculous it seems to have the OS yank control from you at what are probably the most inopportune moments (boot-up and shutdown... really?!)
So I didn't have time to help her, Google for all the privacy options in Win10 and disable them.
What you probably don't know, because you "easily" disabled them right away, is that after this initial setup phase, Win10's remote surveillance features are as quiet as they can possibly be. (until you try to disable them of course at which point they'll scream bloody modal murder)
And with her, that situation is by no means unique. There are millions upon millions of sufficiently able people running Windows that are currently being tracked, against their wishes, because the options are misleading and/or hidden. And they even pay Microsoft for the privilege ...
So yeah, no. Computer security in general isn't about you being safe and protected from criminals - that's easy enough; unless you're being targeted, if you're clever and paying attention and don't click the shady popups/emails, you're mostly fine.
Criminal phishers and ransomware peddlers would be out of a job (or have to get more clever) if everybody had that kind of knowledge.
Just like Microsoft wouldn't have bothered doing all that work on their surveillance systems if the features really were as "easily turned off" by everyone who doesn't want to be tracked.
Just like whitehat hackers: on the rare occasions I hear them talk about the fundamental moral reasons for what they do (beyond "hacking is fun", which also goes for grey/blackhats), it's never because they and their tech-savvy friends need the protection. We catch the bad guys and fix the vulnerabilities because our mothers, grandfathers, neighbours, partners, nephews and that friendly man at the cigar/magazine cornershop need the protection - they don't want to be hacked any more than we do, but not everyone has the time to dig into computers deeply enough to protect themselves.
And you know, those very same people also don't want to be tracked, given the choice. At the very most they'll shrug and admit, defeated: "well, yeah... I don't mind so much, I guess" - because to them it seems to be the price to pay for not having to dig deep into their system, for just being able to use that laptop for email and web browsing.
But no, they don't really want to be tracked. It's NOT a choice, for them, it factors as an additional cost of being able to use a computer.
Source: I teach people (of all ages, but mostly children 8-18y) general computer usage. None of them want to be tracked. None. The ones that even claim they do, I haven't met a single one that, after sitting down and talking for a while, didn't just boast "oh, I don't care" because the alternative would be admitting that they don't have the skill/knowledge to fix the situation, or because simply not using Facebook because it's creepy as fuck, would be social suicide. That's not a choice. It's not a choice!
Just because we (hackernewserpeoples) are clever enough to opt to not pay that cost because it's technically optional, doesn't make it a "non-issue".
I was going to agree with you and admit I was being melodramatic...(well - I mean, saying that it makes me cry was certainly melodramatic - I don't really), but y'know what... it's definitely not ideal.
E.g. scrolling in my IDE... sometimes lines of code don't refresh properly until I scroll back and forward a few times.
And with dev server, webpack watchers, test watchers open plus browser with a few tabs... yeah - it can get pretty sluggish. Maybe I'll try throwing a few more gig ram at the VM.
There's also a logical fallacy here: that because Nadella worked for MS for the past twenty-five years, he must be as power-hungry as Gates. This reeks of the type of elitist nonsense that keeps people from wanting to adopt Linux.
EDIT: Removed insult, came with arguments instead.
Now some here would respond to this as - well - what they've done just shows how big the conspiracy is and how far it extends! There's never a full comeback to that. But I think there's a more straightforward explanation.
The straightforward explanation would be this. Windows is no longer relevant in the way it once was - the cloud is the new platform, and people use tablets and phones. And - a lot of developers hate Windows. OSX has emerged as the dominant developer desktop.
They've realised some combination of this, and formed a company directive, "make developers love us".
Even if it means giving developers non-Windows platforms to work with. Like dot net for linux. Or MSSQL for linux. Or free Visual Studio for linux. Or linux that runs on a Windows kernel. Making a solid effort to do linux for azure. And basically giving away Windows as a desktop. Whatever. The things developers need, they're trying to do that. They want to be where the developer are. They're putting their back into it.
The business case is the kind of vague thing that startups take - find some users, make them love us, and then we'll work out how to make money from it.
Imagine the first reaction in the Redmond office when someone read out the feedback post asking for vi and apt-get. Groans all around. But someone in that room responded to the laughter with an "I know, I know" smile and asked, "OK, but what would it take?" And then everyone perked up and had some fun with the conversation. And they came up with this, someone saying "shit, we /could/ do that, and it would be amazing!" The people who were in that room will remember it as a career event.
I think it's cool. I want to play with it.
We've actually even seen this before: https://en.wikipedia.org/wiki/Microsoft_POSIX_subsystem
(EDIT: To nitpick, I think the MS POSIX subsystem actually implemented the POSIXy standards as native code, as opposed to the translation layer that's mentioned here).
Microsoft once offered to help Apple to make Mac OS a widely-used industry standard. Apple decided it would rather sell $2,500 PCs than $50 software ;-)
You've taken the words right out of my mouth. At first blush, Linux interop on Windows seems like the best of both worlds, but in reality you're giving up a lot more than you gain.
And reminiscent of Cooperative Linux, although that worked at the driver level and allowed the actual Linux kernel to be used.
(OK, I assume there's a small number of developers who develop, or at least debug, for both systems and prefer windows as a development environment, but I assume that number is small, at least on Microsoft's scale).
If you're developing to deploy on Linux but are more of a Windows dev, this helps you, but that doesn't help Microsoft ship more server OS licenses.
If you're a Windows dev this is irrelevant.
If you're a Linux (or POSIX-only) dev, I don't see how this helps you much. It does help a person like me, who only uses Windows when I need some weird tool like a compiler for an exotic embedded part or a vendor-supplied FPGA tool that only works under Windows -- again, not a large enough market to move the needle.
Could the market be CIOs? I.e. demonstrating "hipness" in a way that can be verified when the CIO asks the devs "does this really work the way MS claims?"
Obviously it's not opening the huge number of popular Linux desktop apps to the Windows environment. :-(
And it's pretty damn light for a 15" laptop. Easily one-handable (I think it's like 2 pounds IIRC).
And what's great is that you can crack it open and replace stuff - I swapped out the hard drive and replaced a malfunctioning battery myself with just a Torx screwdriver (super tiny Torx, but still - no glue or special tools or anything).
It was pretty expensive - like $1900 IIRC. You're paying for the high res screen, better build quality, and thinness/lightness. But it's still like $500+ less than an equivalent macbook.
Could it be something to do with helping Microsoft sign up more Azure customers using Linux? Windows dev, Linux on Azure deploy, end to end Microsoft.
Now Windows has some pretty major issues (utf-8 in CMD is agonizing), but it comes out of the box more usable (to me) than either OSX or the vast majority of linux distros. If I can have my nice window management and also the ubuntu user space, I'll be a very happy camper.
[0] https://en.wikipedia.org/wiki/Microsoft_Corp_v_Commission#Si...
It falls on Developers - more so than any other group - to be aware of the dangers their work produces and whom it benefits. They must all be aware of the abuses of our basic rights and human dignity that modern Private Enterprise engages in.
When these same Enterprises pull developers from a system (or systems) that values our individual rights to one that tramples all over them... it angers me. It begins to limit the philosophically user-empowering alternatives we have. When developers rejoice over the actions of an abusive Enterprise, it disheartens me. It feels like the bigger picture is somehow being missed.
To paraphrase Benjamin Franklin: 'Those who give up their freedoms for temporary ease of use deserve neither and will lose both.'
I do not mean this to sound harsh. You can both work within the current technological framework (as we all do) and, at the same time, rail against a future that runs counter to our core beliefs. It's ok to do both. It's ok to work within a Microsoft-produced framework and at the same time let them know that some of what they're doing is counter to your belief system as a private, law-abiding individual.
What we must not do is defend the Police State we're currently building. It runs counter to everything we hold dear as a democratic and free society. Counter to the best kind of future we can envision for ourselves and future generations.
The chasm between the 'haves' and the 'have-nots' is only getting wider. The very concept of a fair society where all men are created equal diminishes. Enterprise OSes produced by Microsoft, for example, have privacy-enabling features the common man does not have access to. The common man - you and I - is now a constant target, whereas corporations, those in government, those in law enforcement and many others accustomed to living above the law continue to live under a different set of rules. This is the emerging new standard.
China will soon get a special build of Windows 10 without telemetry, without "phoning home". I am certain that this special build will contain the same kind of malware and abusive spyware, benefiting the Chinese government over its own citizens. So we possibly have an American software company building and deploying tools for repressive regimes. Yet we have become so complacent that there isn't even a discussion about it. That's how bad things have gotten.
We shouldn't pretend that this philosophical disconnect is not the biggest divide in development. It is essentially the only real difference between operating systems/working frameworks. As good people... I'm saying that we should never, ever defend it, accept it or be complacent about it.
> utilized a lot 25 years ago.
I don't know about you guys, but I'm not gonna hold onto that grudge forever without good reason. The world's changed a lot since GIFs of Calvin pissing on Bill Gates were all the rage.
But perhaps at this point it's important for Windows to stay relevant in an open source world.
http://www.nivot.org/blog/post/2016/02/04/Windows-10-TH2-%28...
I'm not sure which OS this helps/hurts more in the long run, but I know I'm happy.
Disclaimer: I work for Microsoft (double disclaimer: but I'm almost fresh out of university)
The few hours of recreational computing I perform daily have no effect on how the world works. To imagine otherwise would be hubris verging on insanity.
The software I write at work cannot be used to invade anyone's privacy.
I see what your point is but totally fail to connect it to my personal daily reality.
That's your opinion, and you're certainly entitled to it. I value simplicity, and IMHO it's easier for me to wrap my head around FreeBSD than Linux. I've used Linux of course, but I consistently find that it (the userland, the system configuration) violates the rule of least surprise. If Linux works for you, certainly use it. Just realize a lot of us UNIX guys aren't thrilled about Linux, and aren't going to be thrilled about this subsystem, because we have different values than the Linux community.
And by the way, FreeBSD's init system solves a lot of the same problems that systemd solves, albeit in a more transparent fashion. Zfs is also great. FreeBSD users have enjoyed containers now for quite some time, and didn't need Docker to do it.
Your remark about the supposed antiquity of FreeBSD reveals a fundamental ignorance of its technologies and principles.
No, because Android is not a JVM, was never presented as an alternative to Oracle JVM... and never intended to replace existing Java VMs anywhere.
I would LOVE to ditch OS X and run Linux on it, only problem is NO DISTRO supports latest hardware. There are always things that don't work and it gets tiring.
I tried to run Ubuntu on my old Dell Laptop... there would always be some issues related to graphics card, wifi or some shit, overheating, battery drain... or something not working. At the end, had to go for Windows with Ubuntu on vagrant boxes and Desktop Ubuntu in Virtualbox.
Then on my new Macbook Pro, I wanted to run Ubuntu 14.04... but of course, so many things don't work... like right clicking on the touch pad, WiFi or such simplest of features you'd expect to be supported in such widely available and pretty standard hardware... but NOPE. So, it's vagrant and Virtualbox running mostly Ubuntu on OS X again. I am actually considering installing Windows and running Linux on a VM inside it.
MS have been pushing Win10 onto Win7/8(.1) end users for what seems like a few months now, continually escalating how forceful they're being.
eg recent IT media about it:
http://www.theregister.co.uk/2016/03/17/microsoft_windows_10_upgrade_gwx_vs_humanity/
As you mention, it seems like straight-out PR suicide. Personally, I'd find it useful to know what end game justifies all of this bad karma. It'd have to be fantastic. Either that, or someone inside MS is seriously out of control. :(
It's not good, but it's not forcing upgrades either (which is liable to get them sued).
I use Mac OS X and this is what I miss about Linux:
- Clean package/software management and updates.
- Basic customizability without having to install 3rd party binaries from untrusted sources
- Multiple filesystem support (NTFS write not supported out of the box... needing 3rd party software... ext support has to be compiled, breaks with updates... generally PITA)
- Easy installation of software/libraries from source (generally PITA to set up toolchain to compile general cross platform open source software)
- Proper desktop environment (the one that OS X has is shit compared to Gnome)
- Proper file manager (the one that OS X has is shit compared to Nautilus)
- Up to date and standard command-line tools (The ones in OS X are old... for example check the version of unzip... they can't unpack zip files created by new zip tool)
- Better command line system wide file search (I love mlocate, the one in OS X is shit)
- Ability to run docker natively
Unless... There's some fundamental core security problem in earlier Windows versions that isn't in Windows 10 and they don't want to tip off anyone to what it is, because it's so large and egregious it opens them up to a lot of liability and lawsuits. Okay, I'll take my tinfoil hat off now...
Now, I would be glad if they open-sourced WPF, which is a very well designed GUI framework that could be great for cross-platform apps, but it seems like they've kinda abandoned the XAML front. Even releasing existing code as open source takes resources, and Microsoft is not a charity.
I'd say the same thing about porting Microsoft Office to Linux. Microsoft has, in fact, released Office apps for Android, and they would definitely rush to release office for desktop Linux if it had a non-negligible number of users. Maybe next year - after all 2017 is poised to be the year of the Linux desktop!
So yeah, I think Microsoft has really changed. Of course it's all because they're no longer the market leader and they need to survive! Yes, they can no longer succeed just by making a buggier version of tech X, pimping it in MSDN Magazine, and having flocks of developers run to implement their latest version of COM++. So what? It doesn't make it any less real.
You should be suspicious about Microsoft's motive as much as I'm suspicious about Google or Apple or any other large company. But bringing up Embrace, Extend and Extinguish every time Microsoft does something makes you sound only a little less anachronistic than writing their name with a dollar sign.
Apple was notorious for suing Samsung for using the patented shape of a rectangle, and while this was a misrepresentation of the way design patents work, most of the patents they've used in their lawsuits were frivolous as well.
It might in theory tempt developers to migrate from Linux to Windows more comfortably, with both userlands now available at once, but beyond that it doesn't really do anything to make the process easier.
Besides, I don't feel like Windows is that attractive a platform to develop for nowadays, with the cesspool that is the Windows Store, the pointless new Universal Windows Platform forcing your apps to follow the weakest link due to an almost non-existent Windows Phone market, combined with their lack of a leading position on the web.
I think all this is what's bothering Microsoft, because with no development steam, the whole platform suffers a lot. I think the transition at Microsoft lately is happening because they are transitioning from a comfortable leader to a competitor, not because they are trying to squash the competition. They probably long for the days when they were in a position to still have that luxury.
A solid majority of the people on the day side of the planet who are looking at a computer right now are looking at an Excel or Word document.
MS Office has 1.2 billion users [0] (and that probably doesn't include unlicensed users). That's pretty important.
Over half of my 25-year career has been involved with making applications to actually address the business need that people were working AROUND with Word and Excel. I can't complain; it's been a pretty good deal. I'm doing 2 side projects to replace Excel sheets with Rails apps right now. But with more and more "apps" on smartphones and web sites, on the low end, and gargantuan cloud apps like Evernote and Google apps, this space is going to continue to shrink.
The thing that probably won't die is friggin' PowerPoint. If I had a nickel for every slide I've had to look at...
The year of the Linux desktop is the year that "the desktop" no longer matters. With all the smartphones and tablets, and single-board computers making embedded products, we're juuuust about there. ;-)
~ $ scoop search grep
main bucket:
busybox (1.24.0-TIG-1778) --> includes 'egrep'
gow (0.8.0) --> includes 'egrep.exe'
grep (2.5.4)
nim (0.11.2) --> includes 'nimgrep.exe'
pcregrep (10.20)
rktools2k3 (1.0) --> includes 'qgrep.exe'
~ $ scoop install grep
installing grep (2.5.4)
loading http:(...)grep-2.5.4-bin.zip from cache...
checking hash...ok
extracting...done
loading (...)grep-2.5.4-dep.zip from cache...
checking hash...ok
extracting...done
creating shim for grep
grep (2.5.4) was installed successfully!
~ $ grep --version
GNU grep 2.5.4
Copyright (C) 2009 Free Software Foundation (...)
Getting colors to work is a bit of work, as Windows conhost doesn't quite come with VT100 support out of the box yet: http://www.nivot.org/blog/post/2016/02/04/Windows-10-TH2-(v1...
But that's just a:
~ $ scoop install conemu
installing conemu (150813g)
downloading (...)
Note that conemu is in the "extras" "bucket" for scoop: scoop bucket add extras
I work for a big consulting company - 70k employees. Corporate standard is a Windows laptop for everyone. But, most software development is done for Linux environments.
I hate Windows scripting / PowerShell, so I use Cygwin a lot. I don't love Cygwin, but it's my "least worst" option. Local Linux VMs are too heavy.
So, I'm excited about the option to have a 'real' linux locally, with a working package manager.
What Microsoft has created is binary translation for Linux system calls -- like Wine in reverse, allowing Linux binaries to run on Windows. FreeBSD, among many other OSes, has done something similar for a long time, precisely because Linux is the most popular Unix-like OS. In addition, Microsoft doesn't have to care about the drivers -- everyone still writes drivers for Windows.
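To make the point concrete: the interface being translated is the raw system-call layer underneath libc. A minimal sketch (assuming an x86-64 Linux machine, where getpid is syscall number 39) that invokes that layer directly via ctypes:

```python
import ctypes
import os

# Call the raw getpid system call (number 39 on x86-64 Linux) directly,
# bypassing the libc wrapper. This syscall boundary is exactly what a
# compatibility layer like WSL or FreeBSD's Linuxulator must intercept.
libc = ctypes.CDLL(None, use_errno=True)
SYS_getpid = 39  # x86-64 Linux syscall number; differs on other arches

pid = libc.syscall(SYS_getpid)
print(pid == os.getpid())  # the raw syscall agrees with the libc wrapper
```

A compatibility subsystem has to catch every such trap and map it onto its own kernel's semantics, which is why the holes tend to appear in the less common calls.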
But as to your point, not all free software licenses are copyleft. The MIT and BSD and Apache licenses are all proprietary-friendly licenses.
As for needing to extend the GNU/Linux ecosystem, I don't think that's true. It just looks like they've implemented syscall compatibility with Linux (something that FreeBSD has had for donkey's years and SmartOS has been working on for the past few years). Neither of those technologies resulted in more software specifically for GNU/Linux.
> RMS was foresighted enough to make licensing a core part of open source. I have a deep respect for the man.
He founded the free software movement. The open source movement is based on different values (convenience above ethics) which Stallman doesn't agree with.
> The Achilles heel for open source software remains to be patents. In that regard I think many proprietary players still have the upper hand.
Both the GPLv3 and Apache solve this problem. The issue is that too many people are using permissive licenses where it's not appropriate.
Ignoring your insults toward RMS, you can always send him an email at rms@gnu.org. He responds within a few days most of the time.
Lately I tend to have a Linux server VM on my desktop, and then SSH into it to do Linux things... not having to set up a VM is a bonus imho... Being able to run actual Docker is better still, as managing the VM's copy of Docker images and making them accessible outside the VM isn't always fun.
In the end, I think it's pretty cool, and scary all at once... mainly because too many developers put no effort into making their apps work across platforms... not just Windows, but also the BSD and Solaris variants.
The company would be worse off if they do not stick to this direction and start backstabbing. It is bad enough that companies are legal persons, let's not anthropomorphize them even more into irrational evil villains that hold grudges.
When files are manipulated through the Win32 subsystem they are (normally) opened in "deny delete" mode (i.e. without passing FILE_SHARE_DELETE to CreateFile and thence to NtCreateFile). But that's Win32 programs and language runtime libraries explicitly setting the sharing mode flags that they like, not an inability of the Windows NT kernel that is below the Win32 subsystem, nor even an inability of the Win32 subsystem itself.
* https://msdn.microsoft.com/en-gb/library/windows/desktop/aa3...
I suspect that you meant some other graphical user interface SSH client, as opposed to a TUI client like ssh. Yes, a TUI SSH client is useful. I've been using one with SFU/SFUA on a Windows 7 Ultimate machine for some years.
> After all these decades, Unix and Linux people are still limited and encumbered by the antique typewriter-mode way of interacting with a computer?
This sword cuts both ways, remember. In many people's minds, the typewriter-oriented way of interacting with computers -- with all of its concomitant problems of multiple incompatible escape code sequence sets, control sequence tearing, terminal mode enquiry from the host end, modal character encodings, modal display, 8-bitness, 7-bitness (!), and of course all of the modem and serial line hoops to jump through -- is something that the world got away from in the 1980s and early 1990s.
The console subsystems in Windows NT, and in OS/2 1.x before it, provided simple manipulation of cursor and attributes without worrying about which escape sequence set to use or without danger of escape sequence tearing. They provided simple enquiry mechanisms for reading characters and attributes back out of the display, and for reading the cursor. There were no worries about "having bit #7 set", or accidentally dropping into "great runes mode". One could use full UCS-2 (this was pre-Unicode 1.1, remember) if one wanted to avoid worrying about code pages. The kernel didn't impose a fixed number of devices or (at least in Windows NT) a low limit on the total number of consoles. The input stream included both keyboard and mouse events in a single machine-readable form, and application softwares didn't have to decode human-readable (sic) protocols for the latter. Keyboard events comprised key press and release information. There were no worries about BPS settings and carrier detect.
* http://homepage.ntlworld.com./jonathan.deboynepollard/Softwa...
* http://homepage.ntlworld.com./jonathan.deboynepollard/Softwa...
All of these were 1980s advances on the state of the art with respect to typewriter-oriented interfaces, and from them in the same decade we got a whole range of TUI programs (even on MS/PC/DR-DOS) whose textual user interfaces did things like incorporate the mouse, draw UI widgets with actual box/line/arrow glyphs, react to modifier keys as they were pressed and released (the most memorable perhaps being a press and release of the [ALT] key activating the menu bar), and save and restore what was displayed "behind" a window/dialogue box. So one should understand the non-Unix non-Linux world's amazement at people who bemoan the lack of systems inferior to even that.
And after that in the 1980s, they will tell you, we gave you the ability to have graphics with the text, multiple fonts, more than 16 colours, a cross-application clipboard, a unified message queue, message passing between different programs, and so forth.
There's another "after all these decades" question that could be shot back, as well.
> After all these decades, it's only in 2011 that the Unix and Linux worlds finally got a workable mouse event protocol for their typewriter user interface? The OS/2 MOU subsystem could handle 16-bit row and column positions, without any of these problems and in the same recognition that consoles were no longer 80 by 25, in 1987!
* http://www.edm2.com/index.php/OS2_API:DataType:MOUEVENTINFO
* http://invisible-island.net/xterm/ctlseqs/ctlseqs.html#h2-Ex...
* http://superuser.com/a/413835/38062
* https://groups.google.com/d/msg/vim_use/lo6PLRUu2Gg/MDcpLf1P...
* http://leonerds-code.blogspot.co.uk/2012/04/wide-mouse-suppo...
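For the curious, the 2011-era fix referenced above is xterm's SGR (1006) extended mouse protocol, which reports button and coordinates as plain decimal numbers rather than single bytes, lifting the old 223-column limit. A small decoding sketch (the function name and structure are mine, not from any particular library):

```python
import re

# An SGR (1006) mouse report looks like ESC [ < Cb ; Cx ; Cy (M|m),
# where 'M' marks a press and 'm' a release. Decimal encoding means
# coordinates are no longer capped at byte value 223 as in the older
# X10/normal encodings.
SGR_RE = re.compile(r'\x1b\[<(\d+);(\d+);(\d+)([mM])')

def parse_sgr_mouse(seq):
    """Return (button, column, row, pressed) from an SGR mouse report."""
    m = SGR_RE.fullmatch(seq)
    if not m:
        raise ValueError('not an SGR mouse report')
    button, col, row, final = m.groups()
    return int(button), int(col), int(row), final == 'M'

# A press of button 0 at column 500, row 300 -- well past the old limit:
print(parse_sgr_mouse('\x1b[<0;500;300M'))  # → (0, 500, 300, True)
```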
There is no "Microsoft cannot do a decent terminal (like we can)" high ground for you to claim.
Then you've failed to take in the point, copiously made in pretty much all of the coverage of this (and often right at the start), that this is not a virtual machine.
This is Windows NT, the operating system designed with "personality subsystems" right from the start, gaining another subsystem that lets it run ELF64 binaries that were compiled to run on top of the Linux kernel.
Windows NT was designed with multiple "personality" subsystems right from the start. The filesystem that any given "personality" sees is not the underlying system that is seen via the "Native API". They are all reinterpreted views of the NT Object Manager's namespace.
* The Win32 subsystem presents a view where C:\ in a Win32 name is mapped to \DosDevices\C:\ in a Native NT name.
* The POSIX subsystem (at least in the later SFU/SFUA) presents a view where /dev/fs/c/ in a POSIX name is mapped to \DosDevices\C:\ in a Native NT name.
* This Linux subsystem (reportedly) presents a view where /mnt/c/ in a Linux name is mapped to \DosDevices\C:\ in a Native NT name.
\DosDevices\C: is, in turn, a Object Manager symbolic link that points to somewhere else in the Object Manager namespace. (There's also a whole mechanism of "per-login" and "global" symbolic links that I'm glossing over.)
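The per-subsystem views described above can be sketched as a simple mapping (the function and its exact strings are illustrative, not an actual NT API):

```python
# Hypothetical sketch: how the same drive-letter path surfaces under each
# personality's view of the NT Object Manager namespace. Real subsystems
# do this inside the kernel/subsystem, not with string substitution.
def subsystem_views(drive, path):
    """Map a drive letter and relative path to each personality's name."""
    win_path = path.replace('/', '\\')
    return {
        'Win32': '%s:\\%s' % (drive.upper(), win_path),
        'POSIX (SFU/SFUA)': '/dev/fs/%s/%s' % (drive.lower(), path),
        'Linux subsystem': '/mnt/%s/%s' % (drive.lower(), path),
        'Native NT': '\\DosDevices\\%s:\\%s' % (drive.upper(), win_path),
    }

views = subsystem_views('c', 'Users/example')
print(views['Linux subsystem'])  # → /mnt/c/Users/example
```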
Win32 names relative to \ , D:., and . , and POSIX names relative to / and . , are also things that are handled within the subsystem (sometimes within a set of conspirator language runtime libraries, in fact) that are mapped by that to the NT Object Manager's namespace.
* The Win32 subsystem itself only supports one current directory, the current directory on the current drive. The Win32 subsystem keeps a handle to that current directory open per Win32 process (and stores the handle value in the Process Environment Block in the process' address space) and remembers what string, including drive letter, it was set to.
* The current directory on another drive is a fiction maintained by conspiring Win32 language runtime libraries, using a set of otherwise hidden environment variables. A Win32 name with D:. is mapped using the value of an "=D:" environment variable. See http://unix.stackexchange.com/a/251215/5132 for how this can make a mess with the (Win32) Bourne Again shell running on Cygwin.
* Win32 names relative to a driveless \ use a Win32-subsystem-maintained idea of a current drive letter, derived from the Win32 "current directory" string that is set, and then go through the mapping to \DosDevices\C:\ (or whatever drive letter).
* The (SFU/SFUA) POSIX subsystem doesn't need such conspiracy, as the POSIX model is to have only one current directory, too. So the POSIX subsystem keeps a handle to the current directory open per POSIX process.
* This Linux subsystem (reportedly) presents a view where / in a Linux name is mapped to \DosDevices\C:\Users\Kirkland\AppData\Local\Lxss\rootfs\ in a Native NT name.
* Presumably, this Linux subsystem similarly keeps a handle to the current directory open per Linux process, and remembers its string. After all, it has to present /proc/self/cwd to Linux programs.
Every Windows NT subsystem has filename mapping mechanisms, and they all present their own "views" of the actual native namespace of the NT operating system kernel. This is not a new filesystem. It's another NT subsystem with its own view of the NT Object Manager's namespace, just as the other subsystems have.
* Linux is the part that's being replaced, with the Windows NT kernel and this subsystem.
* The people promoting this are not aiming it at GUI programs. (https://news.ycombinator.com/item?id=11391961)
* The desktop seen on the screen will still be Windows Explorer.
So it's "Linux on the desktop", except in every particular. (-:
Although, as others have already pointed out, it differs in some very significant ways.
OS/2 2.x providing Win16 binary compatibility was an after-market system providing binary compatibility with applications made for the operating system that shipped "out of the box". Whereas this is the operating system that ships "out of the box" providing binary compatibility with applications made for an after-market operating system.
(Yes, yes. One could buy OS/2 pre-installed, and one can buy Ubuntu Linux pre-installed. The scale of that, in both cases, is nowhere near significant enough to change the basic fact that overall the two situations are the reverse of each other.)
Also: There was not the extent of existing tools available natively on both platforms, in the OS/2 case. The examples being waved around in the news now are things like Apache, Ruby, Node, and so forth. There wasn't the OS/2-and-Win16 analogue of (say) the Ruby developers deciding in the months to come that a Win32 port is too hard to maintain, and dropping it in favour of just running the Linux Ruby on the Windows NT Linux subsystem. Today's analogue of the OS/2 case would be a universe where there was no Win32 Ruby at all, and the Ruby developers deciding not to start making a Win32 version because the Linux one "is good enough for the few Windows users".
I suspect that drawing parallels based upon what happened with OS/2 2.x and Win16 is a mistake, and those thinking that this will mean an outflux of Windows development "because it happened with OS/2 2.x" (which was more like an influx of development that failed to happen) are indulging in wishful thinking.
There's also the minor matter that, during the OS/2 2.x and Win16 time, there was this little thing called Windows NT lying around, promising a route for OS/2 1.x, where the existing tools were, with its OS/2 subsystem. (It is ironic that we are once again looking at a Windows NT subsystem.) That has no equivalent this time around at all; unless one mis-casts UbuntuBSD (https://news.ycombinator.com/item?id=11326457) in that rôle. It doesn't really fit, though. "Look, all you people with Ubuntu Linux application softwares. Forget that minority Windows thing that you ported to a couple of years ago. Come bring your applications to this new FreeBSD instead." (-:
This has been up since the turn of the century:
* http://homepage.ntlworld.com./jonathan.deboynepollard/FGA/ca...
The demonstrators in the Microsoft video do warn that they will be avoiding some of the holes of the system in their demonstration. One is very briefly visible at 08'13", before the demonstrator rapidly clears the screen (again), when they run apt-get to install git:
E: Can not write log (Is /dev/pts mounted?) - openpty (2: No such file or directory)
The new Windows NT Linux subsystem apparently doesn't have pseudo-terminals. The old Windows NT POSIX subsystem (the Interix-derived SFU/SFUA one) has pseudo-terminals with both BSD and System 5 access semantics, in comparison.
* https://technet.microsoft.com/en-gb/library/bb497016.aspx
* https://technet.microsoft.com/en-gb/library/bb463219.aspx
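On a system that does implement them, the openpty call that failed in the demo behaves like this (a minimal sketch; it requires a real Unix-like kernel underneath):

```python
import os
import pty

# pty.openpty() hands back a connected master/slave pair of
# pseudo-terminal file descriptors -- the very call that apt-get's log
# writer made and that the demoed Linux subsystem answered with ENOENT.
master, slave = pty.openpty()
os.write(slave, b'hello from the pty\n')
data = os.read(master, 1024)   # what was written to the slave side
print(data)
os.close(master)
os.close(slave)
```

(Note that default terminal output processing may translate the newline, so the bytes read from the master side can differ slightly from what was written.)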
Moreover, a Windows console window that is the controlling TTY of a POSIX program in that subsystem has the POSIX cooked input mode with local echo, and generates escape sequences for extended keys.
* https://technet.microsoft.com/en-gb/library/bb463219.aspx#EH...
* https://technet.microsoft.com/en-gb/magazine/2005.05.interop...
* https://news.ycombinator.com/item?id=11391961
* https://blogs.windows.com/buildingapps/2016/03/30/run-bash-o...
I strongly suspect that you will never see systemd working on the Windows NT kernel. Getting systemd to work doesn't just involve supporting the Linux kernel system calls, but also involves getting what those system calls do to work as well. It's all very well supporting open(2), but if one cannot open (say) /proc/self/mountinfo or all of the stuff under /sys or many other things (some listed at http://0pointer.de/blog/projects/the-biggest-myths.html), then systemd might load but it won't run and work.
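A quick probe of the kind of Linux-specific interfaces meant here, beyond the bare syscall table (the paths are ones systemd is known to depend on, per the linked list of requirements):

```python
import os

# systemd expects Linux-specific kernel interfaces such as these to
# exist and behave correctly; a subsystem that only translates system
# calls but does not populate /proc and /sys will fail such checks,
# so systemd might load but it won't run and work.
for path in ('/proc/self/mountinfo', '/proc/self/cgroup', '/sys/class'):
    print(path, os.path.exists(path))
```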
The same goes for upstart. For example: upstart uses pseudo-terminals for logging. The Windows NT Linux subsystem, according to the Microsoft demo video, doesn't implement pseudo-terminals and returns ENOENT when a program attempts to obtain one.
* http://upstart.ubuntu.com/cookbook/#console
* https://news.ycombinator.com/item?id=11415843
Then there's the fact that the Windows NT Linux subsystem doesn't run Linux programs as the first process in the entire system. That honour goes, of course, to Windows NT's own Session Manager. Both systemd and upstart have "Am I process #1?" checks, and operate in a non-system mode if they aren't process #1.
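The check in question is trivial, which is exactly why it is hard to satisfy from a subsystem where NT's own Session Manager already took process #1. Roughly (a sketch, not systemd's or upstart's actual code):

```python
import os

# Both systemd and upstart test whether they are process #1 before
# entering full system mode; a Linux process under the NT subsystem
# never is, so at best they would fall back to a user/session mode.
def init_mode():
    return 'system' if os.getpid() == 1 else 'user/session'

print(init_mode())
```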
Which brings us on to the fact that Windows NT already has a mechanism for launching daemons. It already has a Service Control Manager that supervises daemons and that talks LPC to control utilities. It already has a Session Manager that handles initialization, shutdown, and sessions. Ironically, it has had some of the things that are "new" in systemd for roughly a quarter of a century. (But some of those "new" things aren't even new in the Linux and Unix worlds, really.)
It's worth observing that the approach taken by the old Windows NT POSIX subsystem (the Interix-derived SFU/SFUA one) is to run daemons under Windows NT's own Service Control Manager. There is a small shim (psxrun.exe) for ensuring that the Service Manager could run and control the POSIX program, doco on what environment a POSIX program should expect when run under the Service Manager, and an Interix version of the service(1) command that understands Windows NT service management and how to speak to it.
* https://technet.microsoft.com/en-us/library/bb463219.aspx#EH...
* http://systemmanager.ru/svcsunix.en/extfile/portapps/service...
It would be interesting to see how some of the daemontools family of service management toolsets -- such as nosh, runit, perp, daemontools-encore, and s6 -- fared on the Windows NT Linux subsystem. I suspect that one would trip over unexpected holes in the Windows NT Linux subsystem (like setuidgid not working because the underlying system calls return EPERM, perhaps). But I also suspect for several reasons that quite a lot would work. The daemontools family uses FIFOs and ordinary files as the control/status API; is composable and loosely coupled and so doesn't lock everything in to Linux-specific stuff (like specific files in /proc, /dev, or /sys) even if one tool in a toolset might need such stuff and permits that one tool to be replaced or otherwise worked around; and can do service management without demanding to be process #1.
So one could spin up svscan, or service-manager, or perpd, or runsvdir; stub out or comment out invocations of setuidgid or runuid with a dummy program if the Windows NT Linux subsystem didn't support that; similarly stub out or comment out invocations of jexec (for nosh service bundles that use BSD jails) and whatever of ionice, chrt, and numactl (for nosh service bundles that use those) don't work; and probably get quite far.
But the big deal would be spinning service management up outwith a Windows NT login session, so that daemons are actually daemonized. The old Windows NT POSIX subsystem actually has an init process (and an inetd) that can spin up other stuff. I strongly doubt, given the very clearly stated aims, that the new Windows NT Linux subsystem has (or will have) anything similar.
* https://news.ycombinator.com/item?id=11391841
... would you agree with a call for it to be?
I believe that that was also true for AmigaDOS, even the post-Commodore versions, but I don't have the firsthand knowledge to state it unequivocally.
I hope that people here aren't assuming that terminals are all VT100 clones. (-:
The old (Interix) Windows NT POSIX subsystem added escape sequence recognition to Windows consoles when POSIX programs were using them as their terminal output. Such terminals match up with an "Interix" terminal type in the termcap and terminfo databases. Their escape sequence set is not the same as any DEC VT type.
The Dickey ncurses termcap database has an "interix|opennt|ntconsole" entry, although I have encountered systems with ncurses but without this termcap entry and had to add it myself. It is, apparently, wrong. David Given has a different one in LBW.
* https://technet.microsoft.com/en-gb/library/bb463194.aspx
* http://invisible-island.net/ncurses/colored-terminfo.src.htm...
* https://developer.mozilla.org/en-GB/docs/Mozilla/Developer_g...
* https://github.com/sedwards/lbw/blob/master/extras/interix.t...
As (I hope most) Unix/Linux users would respond: let's not.
A good thing about the Unix/Linux world is that there is a multiplicity of shells. And this is regularly taken advantage of. I have systems (for example) where my interactive shell is the Z shell and /bin/sh is the Almquist shell. I have others where /bin/sh is the Korn shell.
The "Shellshock" incident should have told everyone that Bourne Again shell everywhere, for everything, is not a good idea.
In any case, this is not a debate to be won/lost in the first place, as I hope Unix/Linux users would also respond. There are places for shell scripts, places for Python, places for perl, places for awk, TCL, REXX, execlineb, and a whole lot of others. One size, one scripting language, does not fit all.
Indeed, if you do eventually gain all the Ubuntu command-line toolsets from this, your options should broaden to those and more besides, not narrow to the Bourne Again shell.
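A concrete illustration of the multiplicity: on Debian and Ubuntu, /bin/sh has pointed at dash rather than bash for years, which any script can discover for itself:

```python
import os

# Resolve the /bin/sh symlink chain. On Debian/Ubuntu this typically
# lands on dash, on some BSDs on the Almquist shell, and only on some
# systems on bash itself -- the interactive shell and /bin/sh need not
# be the same program at all.
print(os.path.realpath('/bin/sh'))
```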
This is a common misconception. See https://news.ycombinator.com/item?id=11415366 .
> filesystem ACLs are quite different on linux and windows
It's not that simple. Filesystem ACLs are different between TRUSIX-style and NFS-style, too. Try using a TRUSIX-style setfacl on PC-BSD when the volume has been mounted with NFS-style ACLs, some time. (-:
> What will chown and chmod do?
One thing that was noticeable from Microsoft's demonstration video was that everything seemed to be owned by the superuser and have execute permissions. That included a .gitignore and a README.md file.
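For contrast, here is what a freshly created file looks like on an ordinary Linux filesystem, where neither superuser ownership nor an execute bit appears by default (a minimal sketch):

```python
import os
import stat
import tempfile

# A fresh file from mkstemp gets mode 0600: a regular file with no
# execute permission -- unlike the demo, where .gitignore and README.md
# showed up owned by the superuser with execute bits set.
fd, path = tempfile.mkstemp()
os.close(fd)
mode = os.stat(path).st_mode
print(stat.S_ISREG(mode), bool(mode & stat.S_IXUSR))  # → True False
os.unlink(path)
```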
You are imputing far too much power to Microsoft. It's Linux operating systems that have "legitimized" the Bourne Again shell as an operating system feature, long since and with more clout than Microsoft could wield in this regard.
As such, if Apple can stand up to the "pressure" from a quarter of a century of Linux operating systems, several of which are no longer "bash everywhere" in any case, it can stand up to any pressure that the Windows NT Linux subsystem could possibly exert.
Anyway, "developers who like Unix" hopefully also like its long-standing notion, going back to the 1970s, that there is not only one shell. Thompson, Bourne, Almquist, C, TENEX C, Korn 88, Korn 93, MirBSD Korn, Bourne Again, Z, Debian Policy-compliant, Debian Almquist, Friendly Interactive, Yet Another, ....
Given that this new subsystem is touted as giving developers the means to run Ubuntu toolsets, it behooves us all to look and see what Ubuntu (14.04, the same as in the demo video) actually has when it comes to shells:
In my experience the differing perception of pleasure, certainly with the Interix-derived Windows NT POSIX subsystem rather than the original one, is usually a result of the toolset being BSD rather than GNU. I don't have much trouble with the BSD toolset, myself, especially when switching between Windows and an actual BSD. (-:
In addition to the fact that a number of the "Does this new subsystem ...?" questions are answerable as "No; but the old POSIX subsystem did." (https://news.ycombinator.com/item?id=11416392) the POSIX subsystem has some stuff that we're simply not going to get with a Linux subsystem that has vanilla Linux binaries including libraries right down to the system call level. There are things that only come by adjusting libraries and binaries, because they are above the raw system call level. The POSIX subsystem integrates the user account database access library routines with the Windows SAM, for example. So "ls -l" shows the actual Windows usernames. The POSIX subsystem also comes with a "service" command that understands and can work with the SCM, for another example.
I'd like to see the POSIX subsystem reintroduced. It's a major reason not to use Windows 10.
... but is too outdated to be capable of bootstrapping clang. There's very probably a long chain of bootstraps that would achieve it, but there's not a direct route.
> Will users be able to run their own Linux binaries on Windows?
The answer to that is easily determined from the demo video that Microsoft published. In it, Russ Alexander compiles a program with (Ubuntu binary) GCC and runs it.
Its implicit int in the declaration of main() was jarring. (-:
I am not particularly a Windows fan either, and I have been using Linux since the early 90s. The Linux desktop has always been quite adequate for my needs, so it is hard for me personally to see why it is difficult for people to adopt Linux on the desktop. The only reasons I can see are sub-optimal driver support, some Windows application that keeps them locked in (for me it is OneNote and its support for handwriting), or general fear.