http://www.gaisler.com/index.php/products/ipcores/soclibrary
Also SPARC, but with plenty of it GPL-licensed. It has a quad-core, too, with all of them designed to be easily modified and re-synthesized. :)
I'm assuming it'd be expensive, as it doesn't appear anyone's doing it...
http://www.adapteva.com/andreas-blog/semiconductor-economics...
As far as cost goes, it depends on how you do it. There are three ways to do it:
1. FPGA-proven design done by volunteers that's ported to a Structured ASIC by eASIC or Triad Semiconductor.
2. Standard Cell ASIC that's done privately.
3. Standard Cell ASIC that's done in academia whose OSS deliverables can be used privately.
Option 1 will be the cheapest and easiest. Examples of these are here:
http://www.easic.com/products/90-nm-easic-nextreme/
http://www.triadsemi.com/vca-technology/
These are a lot like FPGAs, although Triad adds analog. The idea is that there's a bunch of pre-made logic blocks that your hardware maps to. Unlike FPGAs, the routing is done with a custom layer of metal that only connects (or powers) the necessary blocks. That lets the chip run faster, draw less power, and cost less. "Cheaper" is important given that FPGA vendors recover their costs with high unit prices.
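Here's a toy model of that difference, with entirely made-up numbers (only the mechanism matches what I described): the FPGA's whole fabric is fabbed and powered whether you use it or not, while the S-ASIC's custom metal layer only hooks up what the design needs.

    # Toy FPGA vs. structured ASIC comparison (all numbers hypothetical).
    # FPGA: the whole fabric plus config SRAM sits there leaking regardless of use.
    # S-ASIC: the one custom metal layer powers/connects only the used blocks.

    FABRIC_BLOCKS = 10_000      # prefab logic blocks on the die
    LEAK_PER_BLOCK_MW = 0.2     # static power per powered block, in mW
    BLOCKS_USED = 3_000         # what one design actually maps to

    fpga_static = FABRIC_BLOCKS * LEAK_PER_BLOCK_MW    # everything powered
    sasic_static = BLOCKS_USED * LEAK_PER_BLOCK_MW     # only used blocks powered

    print(f"FPGA static power:   {fpga_static:,.0f} mW")    # 2,000 mW
    print(f"S-ASIC static power: {sasic_static:,.0f} mW")   # 600 mW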
The S-ASIC vendors will typically have premade I.P. for common use cases (eg Ethernet), and other vendors' stuff can target it. Excluding your design cost and I.P. costs, the S-ASIC conversion itself will be a fraction of a full ASIC's development cost. I don't know eASIC's pricing, but I know they do maskless prototyping for around $50,000 for 50 units. They'll likely also want a six-digit fee upfront with a cut of sales at an agreed volume. Last I heard, Triad is picky about who they work with but costs around $400,000.
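To put those figures in perspective (remember they're secondhand, not vendor quotes), here's the quick arithmetic in Python:

    # Per-unit cost of the maskless prototyping figure I quoted above.
    proto_run_cost = 50_000    # USD, secondhand figure
    proto_units = 50
    print(f"${proto_run_cost / proto_units:,.0f} per prototype")    # $1,000 each

    # Triad's reported entry cost, amortized over a small production run:
    triad_cost = 400_000       # USD, secondhand figure
    for units in (1_000, 10_000):
        print(f"{units:>6} units -> ${triad_cost / units:,.0f} NRE per unit")
        # -> $400/unit at 1k units, $40/unit at 10k units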
Option 2 is the easier version of the real deal: an actual ASIC. This basically uses EDA tools to create, synthesize, integrate, and verify an ASIC's components before fabbing them for real testing. The tools can run $1+ million a seat. Mask and EDA costs are the real killers; silicon itself is cheap, with packaging probably around $10-30 a chip at a minimum of maybe 40 chips or so.

Common strategies are to use smart people with cheaper tools (eg Tanner, or Magma back in the day), use older nodes whose masks are cheaper (350nm/180nm), license I.P. from third parties (still expensive), or build the solution piecemeal while licensing the pieces to recover costs. Multi-project wafers (MPWs) also keep costs down: a mask set and fab run are split among a number of parties, where each gets some of the real estate and pays an equivalent portion of the cost (sketched below). 350nm or 180nm are best for special-purpose devices such as accelerators, management chips, I/O guards, etc that don't need 1GHz. Third-party licenses might be a no-go for OSS unless the I.P. is dual-licensed or open-source proprietary. Reuse is something they all do.

All in all, on a good node (90nm or lower), a usable SOC is going to cost millions no matter how you look at it. That said, the incremental cost can be in the hundreds of thousands if you re-use past I.P. (esp I/O) and do MPWs.
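To make the MPW math concrete, here's a minimal sketch in Python. The shuttle price and die areas are hypothetical; only the cost-splitting mechanism reflects what I described above.

    # Hypothetical MPW shuttle: several parties share one mask set and fab run,
    # each paying in proportion to the reticle real estate it takes up.

    full_run_cost = 1_000_000    # USD for masks + wafers (made-up figure)
    reticle_area_mm2 = 400       # shareable area on the reticle (made-up figure)

    # (party, die area in mm^2) -- hypothetical participants
    parties = [("soc_team", 100), ("uni_lab", 50), ("io_blocks", 25), ("test_chip", 25)]

    for name, area in parties:
        share = area / reticle_area_mm2
        print(f"{name}: {area} mm^2 -> {share:.0%} of run = ${full_run_cost * share:,.0f}")
        # soc_team pays $250k instead of the full $1M; the small test chip pays ~$62k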
A company doing MPWs on a cool old node, with a 90nm memory trick on top:
http://www.tekmos.com/products/asics/process-technologies
Option 3 is academic development. The reason this is a good idea is that universities get huge discounts on EDA tools, get significant discounts on MPWs at places like the MOSIS fabrication service, and may have students smart enough to use the tools while being much cheaper than pros. They might work hand-in-hand with proprietary companies to split the work between them, or at least let pros assist the amateurs. I've often pushed for our universities to make a bunch of free, OSS components for cutting-edge nodes, ranging from cell libraries to I/O blocks to whole SOCs. There's little of that, but there are occasional success stories. Here are two standard-cell ASICs from academia: a 90nm microcontroller and (my favorite) the 45nm Rocket RISC-V processor, which was open-sourced.
http://repository.tudelft.nl/assets/uuid:8a569a87-a972-480c-...
http://www.eecs.berkeley.edu/~yunsup/papers/riscv-esscirc201...
Note: Those papers will show you the standard-cell ASIC process flow and the tools that can be involved. The result with Rocket was awesome.
So, enough academics doing that for all the critical parts of SOCs could dramatically reduce costs. My proposal was to do each I/O block (where possible) on 180nm, 90nm, 45nm, and 28nm, the idea being that people moving their own work down a process node could just drop in replacements. The I/O and supplementary stuff would be almost free, which lets developers focus on their real functionality.
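As a software analogy for that drop-in idea (the block names, nodes, and file paths here are all hypothetical, just to show the shape of it): the interface stays fixed and only the node-specific implementation behind it changes.

    # Toy model of a node-portable I/O catalog: one stable interface per block,
    # with a hard macro per process node behind it. Moving a design from 90nm
    # to 45nm becomes a catalog lookup instead of a redesign.

    CATALOG = {
        # (block, node) -> hard macro layout file (hypothetical entries)
        ("uart", "180nm"): "io/uart_180.gds",
        ("uart", "90nm"):  "io/uart_90.gds",
        ("uart", "45nm"):  "io/uart_45.gds",
        ("gpio", "90nm"):  "io/gpio_90.gds",
    }

    def drop_in(block: str, node: str) -> str:
        """Return the macro for `block` at `node`, if someone has built it."""
        try:
            return CATALOG[(block, node)]
        except KeyError:
            raise KeyError(f"nobody has ported {block} to {node} yet") from None

    print(drop_in("uart", "45nm"))    # io/uart_45.gds -- same interface, new node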
My other proposal was a free, OSS FPGA architecture with an S-ASIC and ASIC conversion process at each of the major nodes. There'd be plenty of pre-made I.P., as above, with anyone able to contribute to it. Combined with the Qflow OSS flow or proprietary EDA, that would dramatically reduce OSS hardware costs while letting us see inside the hardware better.
Archipelago Open-Source FPGA http://www.eecs.berkeley.edu/Pubs/TechRpts/2014/EECS-2014-43...
Note: Needs some improvements but EXCITING SHIT to finally have one!
Qflow Open-source Synthesis Flow http://opencircuitdesign.com/qflow/
Synflow open-source HDL and synthesis http://cx-lang.org/
Note: I haven't evaluated or vetted Synflow yet. However, their I.P. is the only I.P. I've ever seen priced under $1,000. If it's decent quality, then there must be something to their method and tools, eh?
So, there are your main models. Both the commercial and the academic ones might benefit from government grants (esp DARPA/NSF) or private donations from companies/individuals that care about privacy, or just about cheaper HW development. Even Facebook or Google might help if you're producing something they can use in their datacenters.
For the purely commercial route, the easiest approach is to get a fabless company in Asia to do it, so you're getting cheaper labor and not paying the full cost of tools. This is true regardless of who or where: tools are paid for by the year, so tools bought for one project can be reused on the next for free. Also, licensing intermediate I.P. or selling premium devices can help recover costs. That leads me to believe open-source proprietary, maybe dual-licensed, is the best model for OSS HW.
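Here's the amortization argument as back-of-the-envelope Python (the seat price is hypothetical): under yearly licensing, each extra project you push through the same tools drives their per-project cost down.

    # Per-project EDA cost under yearly seat licensing (made-up numbers).
    seat_cost_per_year = 1_000_000    # high-end digital flow seat, USD/year
    for projects_per_year in (1, 2, 4):
        per_project = seat_cost_per_year / projects_per_year
        print(f"{projects_per_year} project(s)/yr -> ${per_project:,.0f} in tool cost each")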
So, hope there's enough information in there for you.
already existed - but apparently not (except for targeting FPGAs, as you mention)?
The analog stuff he mentioned is really tricky on any advanced node. Everything there is difficult, at the least. It all needs good tooling, and the Big Three's tools have had around a billion dollars a year in R&D going back over a decade to get to the point they're at. OSS tooling is getting better, esp for FPGAs. However, open-source ASICs aren't going to happen with the usual open-source development model. Like many great things, they'll be built by teams of pros and then open-sourced. Gotta motivate them to do that. Hence my development models in the other post.
[ed: I'm thinking of things like LEON etc - but as mentioned, and as I understand it, for the ASIC case, maybe not the whole eval board is open. And it's not really in the same ballpark as the dual/quad multi-GHz CPUs we've come to expect from low-end hardware:
http://www.gaisler.com/index.php/products/boards/gr-cpci-leo... ]
Example of a full-custom design flow http://viplab.cs.nctu.edu.tw/course/VLSI_SOC2009_Fall/VLSI_L...
Note: Load this up right next to the simple 90nm MCU PDF I gave you and compare the two. I think you'll easily see the difference in complexity. The first one you'll be able to mostly follow just by googling terms, understanding a lot of what they're doing. The full-custom flow's specifics you won't understand at all: there's simply too much domain knowledge built into it, combining years of analog and digital design experience. Top CPUs hit their benchmarks by using full-custom design for pipelines, caches, etc.
Example of verification that goes into making those monstrosities work:
http://fvclasspsu2009q1.pbworks.com/f/Yang-GSTEIntroPSU2009....
So, yeah, getting to that level of performance would be really hard work. The good news is that modern processors, esp x86, carry lots of baggage we don't need that drains performance. Simpler cores in large numbers, plus accelerators, can be much easier to design and perform much better. Like so:
http://www.cavium.com/OCTEON-III_CN7XXX.html
Now, that's 28nm for sure. The point remains, though: Cavium had nowhere near Intel's financial resources, yet their processors smoked Intel's, and in a shorter development time. Adapteva's 64-core Epiphany accelerator was likewise created for a few million dollars by pros with careful choice of tooling. So, better architecture can make up for the speed you give up by not doing full-custom.
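As a rough illustration of that last point (every number here is invented; it's just the shape of the argument), for parallel workloads, many simple cores at a modest clock can out-throughput a few big full-custom cores at a fraction of the design cost:

    # Toy throughput model: a few fast, complex cores vs. many simple ones.
    # All figures hypothetical; this only illustrates the architecture argument.

    def throughput(cores: int, ghz: float, ipc: float) -> float:
        """Aggregate instructions/sec for an embarrassingly parallel workload."""
        return cores * ghz * ipc * 1e9

    big_cores = throughput(cores=4, ghz=3.5, ipc=2.0)      # full-custom, out-of-order
    many_small = throughput(cores=48, ghz=1.2, ipc=1.0)    # simple in-order + accelerators

    print(f"4 big cores:    {big_cores:.2e} inst/s")       # 2.80e+10
    print(f"48 small cores: {many_small:.2e} inst/s")      # 5.76e+10, ~2x despite lower clock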