from random import randint

def foo():
    if n := randint(0, 3):
        return n ** 2
    return 1337
[(x, y, x/y) for x in input_data if (y := f(x)) > 0]
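On Python 3.8+ this comprehension runs as written; here is a self-contained instance, where `f` and `input_data` are stand-ins I've made up for illustration:

```python
def f(x):
    # stand-in transformation: anything that is sometimes non-positive
    return x - 2

input_data = [1, 3, 5]
# keep only items where f(x) > 0, reusing the computed y in the result tuple
result = [(x, y, x / y) for x in input_data if (y := f(x)) > 0]
print(result)  # → [(3, 1, 3.0), (5, 3, 1.6666666666666667)]
```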
"This is a proposal for creating a way to assign to variables within an expression using the notation NAME := expr."
Hence the former can be used in contexts like "if x := 10: pass", which is the whole point of the PEP.
E.g.
if (x := f()) is not None:
    print(x)
You can read more about it here: https://www.python.org/dev/peps/pep-0572/

match1 = re1.match(text)
if match1 is not None:
    do_stuff()
else:
    match2 = re2.match(text)
    if match2 is not None:
        do_other_stuff()
Which is a bit clunky. You only want to evaluate match2 in case match1 fails, but that means a new level of nesting. Instead, with this proposal, you could do this:

if (match1 := re1.match(text)) is not None:
    do_stuff()
elif (match2 := re2.match(text)) is not None:
    do_other_stuff()
Evaluate and assign in the if-statement itself. This is not dissimilar to the assignment operator in C. In C, you would frequently find loops like `while ((c = read()) != EOF) { ... }`. This would presumably allow a similar pattern in Python as well.

More information can be found in PEP-572: https://www.python.org/dev/peps/pep-0572/
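A sketch of the analogous Python loop on 3.8+, using an in-memory stream in place of C's read() (the stream and its contents are made up for illustration):

```python
import io

stream = io.StringIO("abc")
chars = []
# read one character at a time until EOF (empty string), C-style
while (c := stream.read(1)) != '':
    chars.append(c)
print(chars)  # → ['a', 'b', 'c']
```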
This comment is false. It should say
x = y = z = 0 # Equivalent: (z := (y := (x := 0)))
"...assigns the single resulting object to each of the target lists, from left to right." https://docs.python.org/3/reference/simple_stmts.html#assign...

Here is a demonstration of the difference:
>>> class Node: pass
...
>>> node = blue_node = Node()
>>> red_node = Node()
>>> node = node.next = red_node
>>> blue_node.next is red_node
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: Node instance has no attribute 'next'
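The left-to-right target order is easy to verify with a probe object that records each assignment (a sketch; `Probe` is made up for illustration):

```python
order = []

class Probe:
    def __setitem__(self, key, value):
        order.append(key)  # record the order in which targets are assigned

p = Probe()
# the RHS is evaluated once, then targets are assigned left to right
p['x'] = p['y'] = 0
print(order)  # → ['x', 'y']
```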
Short version.
(x := y) is an expression that:
1. assigns the value y to the variable x
2. has the value y.
So `print((x := 1) + 1)` prints '2', and sets x=1.
A ton of languages [e.g. C, JS] have '=' work this way. And a ton of style guides for those languages tell you to avoid using it like that, because it's confusing. So this is a bit controversial.
if ((int test = my_func_call()) == BLAH) { do_foo(); }
EDIT: also, I'm mentioning "in your opinion" because adding that to your own statements indicates that you're open to discussion. It's also good to use it as a reminder to yourself (speaking as a former physics student, and most people who know physics students will agree how "absolute" and unintentionally arrogant they tend to be in their claims until they learn otherwise).
I'm sure your coding experience was frustrating, and I'm sorry to hear that, and I understand that we all need to vent sometimes, but trying to stay open to other viewpoints is better for your own sanity, wisdom, and social connections in the long run.
while (bytes := io.get(x)):
and:

[bar(x) for z in stuff if (x := foo(z))]
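The while form above runs as written on 3.8+; a sketch with io.BytesIO standing in for the hypothetical io.get:

```python
import io

src = io.BytesIO(b"0123456789abcdef")
chunks = []
# read fixed-size chunks until the stream is exhausted (empty bytes is falsy)
while (chunk := src.read(4)):
    chunks.append(chunk)
print(chunks)  # → [b'0123', b'4567', b'89ab', b'cdef']
```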
Every time Python adds an expression counterpart to an existing statement (lambdas, comprehensions, ternary...) there is a (legit) fear it will be abused. But experience tells us that the slow and gradual pace of the language's evolution, combined with the readability culture of the community, doesn't lead that way.
While we will see code-review-breaking material in the wild, I believe that the syntax will mostly be used sparingly, as with other features, when the specific need arises for it.
After all, it's been, as usual, designed with this in mind: "=" and ":=" are mutually exclusive. You don't use them in the same context.
The grammar makes sure of it most of the time, and for the rare ambiguities like:

a = b
vs (a := b)

the parentheses will discourage pointless usage.

My bet is that we will mostly see rare but useful and expressive use cases in production, which is exactly the goal.
Given the month of debates around this, I think it's a fine compromise.
Like many, I would have preferred the use of the "as" keyword instead of a new operator, since it's already used to bind things to names in imports, context managers and try/except.
However, the new syntax has 2 advantages: it reads the same way as the original operator, and it supports type hints out of the box.
For example, import mod is NOT defined as
mod = eval(open("mod.py").read())
but involves an abstract load-module operation, which is dependent on the environment.

That's why := is just syntactic sugar; there are no new semantics.
Is there a better example of something that is generally agreed to be "shitty" yet could be fixed in a clean way, without breaking backwards compatibility?
https://en.m.wikipedia.org/wiki/Zen_of_Python
For anyone who hasn't read the original PEP link, what do you suppose this does? Guess its meaning:
1.
if (row := cursor.fetchone()) is None:
    raise NotFound
return row
2. Next, guess the meaning of this - what does it look like it does?

row = cursor.fetchone()
if row is None:
    raise NotFound
return row
Answers: The first does the second. The second doesn't need explanation. I literally don't need to tell you what it does.
(The two examples are from the link.)
Now you might think that this means I'm against this PEP.
But actually [EDIT: because I know there is a VERY high bar to being accepted into Python] to me it means this is going to save an incredible amount of time - it must be very good indeed to be accepted.
So if they decided to adopt it - sure, I'll use it. And I bet it's great in practice.
It's also super explicit. If you don't know what := does you will go look it up.
If Python is still driven by Python design philosophy and all that discussion, then this will be very helpful. It surely had a very high threshold to meet.
This PEP solves very little, saves a few characters of code, and adds complexity to readability.
The sugar is sprinkled on top of syntax, the stuff the parser deals with. Typing a += 1 instead of a = a + 1 is sugar because it parses the same. This assignment syntax seems different. IMHO.
One main reason in my opinion as to why Python 2.7 stuck around for so long was that everyone agreed that it was the last stable version of Python 2, and the devs could make Python 3 changes without worrying significantly about backward compatibility. I've been using Python3 since Python 3.3 but have had to write code that was Python 2 compatible for large code bases for about 5 years because I knew colleagues / users of my packages would not upgrade. This meant using a select subset of features in Python 3 that had been backported to a version of Python 2.7 that I knew I could get my colleagues / users to upgrade to. It has been great to watch the language evolve and Python 3 really gets a lot of things right, but adding breaking syntax features every minor release is extremely annoying. I have to have a mental checklist of all the different possible subsets of features I have access to given a minimum version of Python I want to support. I've spent the last couple of years in my professional career converting large Python 2 only code bases to Python 3.3+. But if someone wants to use async, I need to bump the minor version. If someone wants to use type hinting I have to bump the minor version. If someone wants to use f-strings I have to bump the minor version. If someone wants to use data classes I have to bump the minor version. It's nuts (to me anyway)!
This sounds rant-y but I genuinely want to know what other people are doing to mitigate this. I love Python! I have spent years advocating its merits. But thinking about large codebases in Python just worries me and the future doesn't look promising here. Are Python developers hoping that once 2020 arrives they'll have a stable Python release and work on Python 4? We'll have the Python 2/3 split all over again!
My personal opinion is that Python needs an officially maintained translator for the latest Python features back to older versions. My current favorite package is py-backwards [0] but it is rather unmaintained at the moment.
- libs not available for Python 3 (this is a big problem for some people)
- codebases work and don't need/want any evolution Python 3 provides
- fear of Python 3 being unstable (that was legitimate during the first few Python 3 versions, not that much anymore)
There are probably more.
Just leave it to those who have used Python for a while now and actually know what it's missing.
It's still a pity that Python 3 isn't more adjusted to that fact.
I don't think that's right; what expression/statement is `x := y` equivalent to? I'm thinking in particular about using mutable collections to emulate assignment in a lambda, e.g.
>>> counter = (lambda c: lambda: (c.append(c.pop() + 1), c[0])[1])([0])
>>> counter()
1
>>> counter()
2
>>> counter()
3
It looks like this could now be done as:

>>> counter = (lambda c: lambda: (c := c + 1))(0)

Yet the semantics here are very different: one is pushing and popping the contents of a list, without changing any variable bindings (`c` always points to the same list, but that list's contents change); the other has no list, no pushing/popping, and does change the variable bindings (`c` keeps pointing to different integers).

Maybe it's equivalent to using a `=` statement, but statements are forbidden inside lambdas. Maybe the lambdas are equivalent to `def ...` functions, but what would their names be? Even if we made the outer one `def counter(c)...` the resulting value would have a different `func_name` (`counter` versus `<lambda>`).
Even the `if` examples that are scattered around this page don't seem to have an equivalent. For example:
if (x := foo()) is not None:
    do_something()
We can't "desugar" this, e.g. to something like the following:

x = foo()
if x is not None:
    do_something()
The reason is that we're changing the point at which the binding takes place. For example, Python guarantees to evaluate the elements of a tuple in left to right order (which we exploited in the above push/pop example). That means we could write:

if (sys.stdout.write(x), (x := foo()) is not None)[1]:
    do_something()
This will print the current value of `x`, then update `x` to the return value of `foo()`. I can't think of a way to desugar this which preserves the semantics. For example, using the incorrect method from above:

x = foo()
if (sys.stdout.write(x), x is not None)[1]:
    do_something()

This isn't equivalent, since it will print the new value of `x`. Maybe we could float the `write` call out of the condition too, but what about something like:

if foo(x) and (x := bar()):
    do_something()

We would have to perform `foo(x)` with the old value of `x`, store the result somewhere (a fresh temporary variable?), perform the `x = bar()` assignment, reconstruct the condition using the temporary variable and the new value of `x`, then `del` the temporary variable (in case `do_something` makes use of `locals()`).

PS: I think this `:=` is a good thing, and writing the above examples just reminded me how infuriating it is when high-level languages distinguish between statements and expressions, rather than having everything be an expression!
The use of "=" for assignment has long been a pet peeve of mine. It was a mistake when C did it, and it's been a mistake for so many subsequent languages to copy it.
"=" shouldn't be an operator at all, it makes a lot more sense to use ":=" and "==".
Pascal's use of ":=" for assignment and "=" for equality strikes me as almost as clear.
Still, at least C makes consistent use of '=' for assignment, unlike that god-forsaken trainwreck of a language, VB.Net, which uses it for both assignment and for equality depending on context.
Which shouldn't be that surprising considering originally Netscape were going to port Scheme to their browser before choosing to create a new scripting language with "Java-like syntax" (you can argue amongst yourselves just how Java-like the syntax really is).
- Instantiating an object from a name in a string. Like instantiate a 'Foo' when you have a string variable that contains 'Foo'. I can't remember the syntax even though I looked it up two days ago, and I never will because it's such a shit show. Not to use PHP here as an example of a great language, but there at least the intuitive '$obj = new $var' works as you expect it. Or, in C++ you have to do it manually, which is also fine - at least it's consistent.
- The weird sort-of typing of variables. Variables have types, but you can assign a value of a different type to them, and the actual types usually don't matter except when they do. So you do print "Hey " + var but now you need to know what type var is because you might need to str() it.
- The whitespace-is-important-except-when-it-isn't. OK so braces are the devil's work, but when it's inconvenient, we're not that strict on white space (when initializing lists, when having expressions that span several lines, ...) so now everything can still look out of whack.
- .iteritems(). Really?
- super(ClassName, self).__init__(argument). Wut? Yes when I parse it token by token I understand, but why? Maybe the other magic methods are in this category too, but probably to a lesser degree.
- (I had some other things here about primitive OO capabilities, shitty package system/versioning, and some more, but those were all so far away from 'syntactic sugar' that they didn't fit this list no matter how hard I twisted the argument)
Look, I do understand why they are this way. For each of them, there is a reason that any reasonable person would say 'yeah that makes sense' to, possibly after some explanation of the history or context or whatever. But then at least be honest and stop promoting the language as so 'intuitive' or 'beginner-friendly' or 'much more clean than other languages'. Sure, it's not as bad as R, but it's still just like any other 20+ year old language in widespread use - crufty, idiosyncratic in many respects, and in general requiring a bunch of frustrating head butting before you can be productive in it.
And to tie it to the OP - it seems this new syntax is promoted as being for 'beginners' or to make it 'easier to teach'. Well good luck with that, I say.
if (match := re.match(r1, s)):
    o = match.group(1)
elif (match := re.match(r2, s)):
    o = match.group(1)
if foo := bar.get(baz):
    bar[baz] += 1
    return foo
else:
    bar[baz] = 1
    return 0
Where bar is a dict keeping track of multiple things, and a non-existing key (baz) is never an error but rather the start of a new count. Faster and more readable than:

if baz in list(bar.keys()):
    ....
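For pure counting, the standard library's collections.Counter sidesteps the branch entirely. A sketch matching the behavior described above (the `bump` helper is made up for illustration and returns the previous count):

```python
from collections import Counter

bar = Counter()

def bump(baz):
    # missing keys default to 0, so there is no KeyError and no branch
    previous = bar[baz]
    bar[baz] += 1
    return previous

print(bump('k'))  # → 0
print(bump('k'))  # → 1
```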
Similar to Swift's 'if let', it seems.

(as to your first point, if you'll allow me to be even more snarky and cynical as I already have been in this thread (might as well go all out now), the fictional 'pep' I would like to see most is 'method and apparatus to instill some realism and humility in the average Python advocate's conception and description of the language'. But here too I will freely admit that I'm probably susceptible to significant observation bias and/or bad luck, and that others could have radically different experiences from myself.)
Smalltalk also uses ":=" for assignment and "=" for comparison. In Pharo, VA and Dolphin at least, it does what this Python proposal does - returns the value of the last expression.
You could, but that would turn "syntactic sugar" into a useless phrase with arbitrary meaning.
The phrase "syntactic sugar" is usually reserved for language constructs which can always be rewritten, in-place, to some other construct in the same language, such that the semantics is identical (i.e. we can't tell which construct was used, unless we parse the contents of the file).
Python has examples like `foo += bar` being sugar for `foo = foo + bar`.
As an aside, your mention of "machine language" implies the use of operational semantics. That's where we say the "meaning" of a program depends on what it does to the machine when executed. That's fine, but it's not the only approach to semantics. In particular denotational semantics defines the meaning of a program by giving a meaning to each syntactic element of the language and their combinations, usually by rewriting them into some other, well-defined language (e.g. set theory). I much prefer denotational semantics, since it lets me 'think in the language', rather than making me 'think like the machine'.
You could do this, although it's admittedly uglier than your example:
In [1]: sorted(enumerate([('b', 1), ('c', 3), ('a', 2)]), key=lambda x: (x[1][0], x[0], x[1][1]))
Out[1]: [(2, ('a', 2)), (0, ('b', 1)), (1, ('c', 3))]
However, if you're flexible about the ordering of the resulting tuples, this seems clearer and reasonably painless:

In [1]: sorted((x, i) for i, x in enumerate([('b', 1), ('c', 3), ('a', 2)]))
Out[1]: [(('a', 2), 2), (('b', 1), 0), (('c', 3), 1)]
I know that doesn't address your underlying complaint. This is mainly to note that the flexibility of Python tends to allow a variety of approaches and that sometimes finding the clearest one takes some effort. ("There should be one obvious way to do it..." often does not hold, IMHO.)

for n in range(100) if n%2:
    print(f'{n} is odd number')
Does anyone know if there is a PEP covering that?

[ {X, Y, X/Y} || X <- Some_Function (), Y <- Some_Other_Function () ]
And people bitch about Erlang syntax.
Edit: "/" is the division operator
try:
    bar[baz] += 1
except KeyError:
    bar[baz] = 1
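dict.get gives a third spelling with no exception machinery and no walrus; a sketch (the `bump` helper and its return-the-previous-count behavior are my assumptions, mirroring the snippet earlier in the thread):

```python
def bump(bar, baz):
    # previous count, defaulting to 0 for unseen keys
    previous = bar.get(baz, 0)
    bar[baz] = previous + 1
    return previous

counts = {}
print(bump(counts, 'k'))  # → 0
print(bump(counts, 'k'))  # → 1
```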
Also you can check if a key is in a dict simply by doing "if baz in bar" - no need for "list(bar.keys())", which will be slow (temp object + linear scan) vs an O(1) hashmap lookup.

On the other hand, you have a concept which translates easily to any language with first-class functions and lambdas. Even the syntax stays the same among languages which use "f(x,y)" for function evaluation and parameter passing.

/* This post is for those occasions when a list comprehension style is advocated over a functional style, which I know was not necessarily what you were doing in your comment. But I think the two points are valid enough on their own. */
for n in range(100):
    if n%2:
        print(f'{n} is odd number')
Only two more indents. What is the point of your proposed syntax?

for n in filter(is_odd, range(100)):
    print(f'{n} is odd number')
for n in (i for i in range(100) if i % 2):
    print(f'{n} is odd number')
Are there any points against these solutions other than verbosity?

for n in range(100):
    if n%2:
        print(f'{n} is odd number')
Your proposal also leads to a more ambiguous grammar, because the following is currently allowed:

for n in range(100) if n%2 else range(n):

The ambiguity can be extended with multiple if's; compare:

for x in range(10) if n%2 if n else range(n):
for x in range(10) if n%2 if n else range(n) else n**2:

A work-around would be to raise something akin to the "SyntaxError: Generator expression must be parenthesized if not sole argument" that occurs with expressions like "f(b, a for a in range(3))", but that's a lot of work just to save a newline, two indents, and ":", isn't it?

Code starts becoming a lot harder to reason about when more than one piece of state is mutated on the same line. The good design of Python makes this harder than in, say, C, and I think this is a step in the wrong direction in that regard.
The two real things this solves are checking for truthiness in an if and reusing values in a filtering comprehension. Instead of the syntax we have now - which can be used anywhere, adds a whole new concept, and feels kind of out of place - I would have much preferred a solution that can only be used in vetted places, doesn't add a new thing people need to learn, and follows the style of the language.
For example, my preferred solution for `if` would have been:
if thing() as t:
    print(t)
Usage of `as` is already established by the `with` block.

[value for x in y
 if value
 where value = x * 2]

The order is unfortunately a bit weird here, but there is no need to add the whole concept of a different type of assignment, and this syntax will feel instantly recognizable to people familiar with mathematical notation, which is where the existing list comprehension syntax comes from and so has been established as well.

That doesn't seem possible (see my sibling comments). You might be able to write a different program, which might be similar (e.g. same return value, most of the time), but I don't think there's anything that's equivalent.
This is an important distinction! For example, let's say you're given a program that uses a lot of `x := y` expressions. You're asked to back-port this to an older Python version, which doesn't have `x := y`. What do you do? If there's an equivalent expression, you can just swap them out; you could even automate it with a keyboard macro, since there's no need to think about it.
If, on the other hand, you only know how to write similar code, you can't be as confident. Some examples of where "similar" programs can end up behaving differently are:
- The application makes heavy use of threading
- There are lots of magic methods defined, like `__getattribute__`, which can alter the meaning of common Python expressions (e.g. `foo.bar`)
- Those magic methods cause global side effects which the program relies on, so that they have to get triggered in the correct order
- The program manipulates implementation features, like `locals()`, `func_globals`, `__class__`, etc.
- The software is a library, which must accept arbitrary values/objects given by users
- It makes use of hashes, e.g. to check for data in an existing database, and those hashes may depend on things like the order of insertion into internal properties
Whilst it's perfectly reasonable to curse whoever wrote such monstrous code, that doesn't help us backport it. We would have to tread very carefully, and write lots of tests.
> I'd prefer more lines for readability reasons
Verbosity and readability are not the same thing. Overly verbose code might have easier to understand parts, whilst obscuring the big picture of what it's actually doing. A classic example is assembly: each instruction is pretty easy, e.g. "add the value of register A to register B", "jump to the location stored in register C if register B is non-positive", etc. Yet we can pluck a page of disassembled machine code from, say, the middle of LibreOffice and have no idea what problem it's meant to be solving. (I posted a rant about this at https://news.ycombinator.com/item?id=16223583 ).
while answer = "yes":

instead of:

while answer == "yes":

Those mistakes introduce bugs that are hard to spot because they don't cause an immediate error; linters can hardly help with them, and even a senior can make them while tired.

Nitpick, but I don't think that's true – AFAIK they translate into different method calls.
`foo + bar` →`foo.__add__(bar)`
`foo += bar` → `foo.__iadd__(bar)`
(note the `i` in the second one)

m = re.match(r'\s*(\d+)\s*', args)
if m:
    start = int(m.group(1))
    end = start + 10
m = re.match(r'\s*(\d+)\s*,\s*(\d+)\s*', args)
if m:
    start, end = map(int, m.groups())
With the new syntax this becomes:

if m := re.match(r'\s*(\d+)\s*', args):
    start = int(m.group(1))
    end = start + 10
if m := re.match(r'\s*(\d+)\s*,\s*(\d+)\s*', args):
    start, end = map(int, m.groups())
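The rewritten version runs as-is on Python 3.8+; for instance (the sample `args` string is made up):

```python
import re

args = " 12 , 34 "
start = end = None
# bind the match object and test it in one step
if m := re.match(r'\s*(\d+)\s*,\s*(\d+)\s*', args):
    start, end = map(int, m.groups())
print(start, end)  # → 12 34
```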
This pattern occurs just often enough to be a nuisance. For another example drawn from the standard library, here's modified code from "platform.py":

# Parse the first line
if (m := _lsb_release_version.match(firstline)) is not None:
    # LSB format: "distro release x.x (codename)"
    return tuple(m.groups())

# Pre-LSB format: "distro x.x (codename)"
if (m := _release_version.match(firstline)) is not None:
    return tuple(m.groups())

# Unknown format... take the first two words
if l := firstline.strip().split():
    version = l[0]
    if len(l) > 1:
        id = l[1]
So, my reasoning is that I surely must end up liking it, if it ended up accepted despite breaking every rule in the book. They wouldn't accept it for something that wasn't incredibly useful. My first impression is that it's incredibly ugly, unpythonic, and unreadable (what does it even do?), and this is pretty "obvious".
So arguing based on what I know about Python design philosophy, and the fact that this has been so deeply discussed, it must be absolutely fantastic.
I wouldn't think it's great (if you gave me the syntax), I'd recite all the Python Values it breaks -
https://en.m.wikipedia.org/wiki/Zen_of_Python :
- Beautiful is better than ugly.
Nope, this is very ugly.
- Simple is better than complex.
Nope, this isn't simple.
- Flat is better than nested.
Nope, this explicitly exists to let you nest something.
- Sparse is better than dense.
Nope, this explicitly exists to make code denser.
- Readability counts.
Nope: this is literally not readable. You don't know what the first code example I gave does. Normally with Python you just "write pseudocode and make sure you indent it properly."
- Special cases aren't special enough to break the rules.
This is a special case.
but here we get to the next rule, which trumps ALL of the above:
Although practicality beats purity.
- There should be one—and preferably only one—obvious way to do it.
This is broken, as now you can do it on two lines or one line.
- If the implementation is hard to explain, it's a bad idea.
It's kind of hard to explain, look at all our comments.
- If the implementation is easy to explain, it may be a good idea.
This isn't the case here.
I will grant that this is extremely explicit syntax. In this sense it is MUCH better than overloading = based on different contexts or something. As I mentioned above, anyone who sees := knows that it's a "known unknown" -- WTF is this?
So we are left with a single solitary Pythonic value:
- Practicality beats purity.
Based on this alone, I reason that it must be great to have been accepted. It must be extremely practical. It must save a lot of time.
I am quite disposed to thinking (honestly!) that it must be pretty darn great. It will save a lot of time.
There's no way that it made it into the language if it didn't, since it's so obviously ugly and unpythonic.
But practicality beats purity. I'm sure I'll love it!
Python chose a different design trajectory - personally I can't stand it, but it certainly follows some sort of internally consistent reasoning.
match = re.match(r1, s)
if match:
    o = match.group(1)
else:
    match = re.match(r2, s)
    if match:
        o = match.group(1)
or a bit shorter:

match = re.match(r1, s)
if not match:
    match = re.match(r2, s)
if match:
    o = match.group(1)
You could also just loop:

for pattern in (r1, r2, ...):
    match = re.match(pattern, s)
    if match:
        o = match.group(1)
        break
else:
    do_failure_handling()
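The for/else above is real, runnable Python: the else clause fires only when the loop finishes without hitting break. A compact demonstration with stand-in patterns and input:

```python
import re

patterns = (r'\d+', r'[a-z]+')
s = "hello"
for pattern in patterns:
    match = re.match(pattern, s)
    if match:
        o = match.group(0)
        break
else:
    o = None  # runs only if no pattern matched (loop finished without break)

print(o)  # → hello
```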
But this goes a bit beyond the original question.

First of all, you have indented 1 and False equally. Is that a typo? Or is it your opinion that the if should always consist of the if and the else branch without using the else keyword?
Secondly, if you want to return a value you need to use the return statement.
Also you wrote bar(foo) but foo was the name of the function, not the name of your parameter.
Perhaps what you are looking for is this:
def foo(x):
    return 1 if bar(x) else False
The problem is that if you add blocks, then half of the syntactic features added in the last decade become redundant, as a block version would simply solve the problem better. That would create a lot of dead design, and that makes it a bad solution.
My point is that it would be nice to have a consistent syntax for all for-loops, either being a part of a comprehension or standing on their own.
EDIT:
> Your proposal also leads to a more ambiguous grammar because the following is currently allowed:

for n in range(100) if n%2 else range(n):

Not really; it gives me "NameError: name 'n' is not defined". Unless it is an 'n' defined in the outer scope, of course.

Additionally, Python 3 is only really usable for the more professional audience since 2-3 years ago. Before that they were busy with stabilizing Python 3, porting popular libs, porting less popular libs, or creating replacements for dead libs.
Just for fun, this seems to work:
(locals().pop('x', None), locals().setdefault('x', y))[1]
Python doesn't have specific syntax for that. It can be as simple as:
obj = globals()["Foo"]()
That assumes "Foo" is in your global namespace. If you don't care about security then you can do:

>>> import math
>>> s = "math.cos"
>>> eval(s)(3)
-0.98999249660044542
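A middle ground resolves dotted names without eval; a sketch (the `resolve` helper is made up for illustration):

```python
import importlib

def resolve(dotted):
    """Resolve a dotted path like 'math.cos' to the object it names."""
    module_name, _, attr = dotted.rpartition('.')
    return getattr(importlib.import_module(module_name), attr)

print(resolve('math.cos')(0))  # → 1.0
```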
If you care about security then you might not want to allow arbitrary objects, like "os.unlink", to be referenced. There are third-party packages which provide different models of how to get objects, like Django's "import_string" at https://docs.djangoproject.com/en/2.0/ref/utils/#django.util... .

"The weird sort-of typing of variables. Variables have types,"

Variables have only one type, "reference to Python object". An expression like 'var = "Hey " + var' may change the type of the value that var references, just like how 'var = var * 234.567' may change the type of the value that var references from an integer to a float, or 'var = var * "Hey"', if var == 2, causes var to be a reference to the string "HeyHey".
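That model is easy to observe directly: rebinding changes which object the name references, and the type travels with the object, not the name:

```python
var = "Hey "
var = var + "there"        # rebinds var to a new str object
print(type(var).__name__)  # → str

var = 2
var = var * "Hey"          # int * str repeats the string
print(var)                 # → HeyHey
```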
".iteritems(). Really?"

This was for a transition phase. It no longer exists in Python 3, where items() returns a view instead of a list.
"super(ClassName, self).__init__(argument). Wut?"
In Python 3 this is: "super().__init__(argument)", as in:
>>> class A:
... def __init__(self, s):
... print("A says", s)
...
>>> class B(A):
... def __init__(self, t):
... super().__init__(t*2)
...
>>> B("hello?")
A says hello?hello?
<__main__.B object at 0x10b201630>
"but it's still just like any other 20+ year old language in widespread use - crufty, idiosyncratic in many respects"

A reason for the oft-bemoaned backwards-incompatible changes to Python 3 was to remove some of the crufty, idiosyncratic language features that you rightly pointed out. You are still using Python 2.7, so cannot take advantage of those changes.
A bit poorly. Compare:
>>> f(1, 2 for x in )
File "<stdin>", line 1
f(1, 2 for x in )
^
SyntaxError: invalid syntax
>>> f(1, 2 for x in r)
File "<stdin>", line 1
SyntaxError: Generator expression must be parenthesized if not sole argument
See how the first one gives the location of the error while the second does not? As I recall, this is because the first can be generated during parsing, while the second is done after the AST is generated, when the position information is no longer present.

That's why the following:
>>> f(2 for x in X) + g(1, 2 for y in Y) + h(z**2 for z in Z)
File "<stdin>", line 1
SyntaxError: Generator expression must be parenthesized if not sole argument
doesn't tell you which generator expression has the problem.

Yes, I meant that if 'n' is defined in an outer scope. The expression I gave is not a syntax error but a run-time error.
(define (foo x)
(if (bar x)
1
#f))
The returned value is the value of the last expression. No need for an else, or a return keyword.

Before there were exceptions, `if ($result = query())` was the cleanest error-checking approach.

Also PHP didn't invent that syntax in any way, but followed C (in the end, early PHP was a simplified C created by a C programmer); it only later got higher-level constructs making this less needed.
So at least according to Wikipedia it is a guide. Of course, whether it's applied in practice is a different matter.
In the Python version, the new "(a := 1)" will stand out compared to the canonical "a == 1", as the ':' and '()' are unexpected here and scream that it's not a regular if test.
It's a condensed definition of the philosophy and goals which went into python at that point. A guide to define the borders of the road to good python, but also a guard against those who wanna change the direction of this road.
First-class functions, short lambdas, list comprehensions, generators, map(), filter(), itertools, operator and functools are quite a rich toolbox already. But you won't have more. It's a choice.
The idea is to have enough to be productive, and not enough to be dogmatic. The experience of Guido, and it's one that I share, is that too much functional tooling drives a style that favors expressive writing at the expense of ease of reading.
It's not by chance that LISP and Haskell are considered hard languages to get into, while Python is considered easy to start with.
It has a cost, since no language is perfect, but that's the path this language follows and requesting a snake to fly will only bring you disappointments.
Python tries to strike the balance between the importance of rich expressiveness and the non-negotiable necessity of keeping the code readable: you read a line much more often than you write it, after all. It's a key philosophy of the language. It shaped and will shape numerous decisions around it.
This PEP is a perfect example: it took years for the concept to be integrated into Python, and the final debate about this concrete implementation took months. The result is a carefully crafted feature with a lot of details designed to discourage abuse and remove the need to ponder when to use it or not.
Does not. One is addition, the other is in-place addition; they're different things and can behave differently. E.g. between "a += b" and "a = a + b", the former might not construct an intermediate object, instead mutating the existing a in place.
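A minimal demonstration of the difference, using lists (where `__iadd__` mutates in place while `+` builds a new object):

```python
a = [1, 2]
alias = a        # a second reference to the same list object
a += [3]         # in-place: list.__iadd__ extends the existing list
assert alias == [1, 2, 3]   # the alias observes the mutation

b = [1, 2]
alias_b = b
b = b + [3]      # list.__add__ builds a brand-new list; b is rebound
assert alias_b == [1, 2]    # the alias still holds the original
```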
It’s fine like that in Scheme and the other Lisps in part because well that’s the way they always did it, but it’s quite different from how it is and has been in Python.
If they want Lisp in Python they should look into Hy.
With 2 vs 3 there were a large number of breaking changes, so effort spent adding new features to the 2.x lineage made sense. But if you're not able to get someone to update from 3.6 to 3.7 for dataclasses, is a 3.6.7 that supports them [but doesn't reserve 'async'] likely to be any different?
> Are you using features that are only available in the version of Python all your colleagues / users have access to? Are you using the latest version of Python and hoping your colleagues and users can upgrade to that version of Python?
If you have access to our software, you have access to http://python.org -- that's never the issue.
For colleagues... everyone has 3.6.4-3.6.6 on their work computers unless they've actively prevented it.
We don't have anything using python for external users. If we did, I wouldn't want to assume that they already have python installed at all. I'd be inclined to look into PyInstaller for distributing that, and then we could depend on whatever version we wanted.
What do you mean by "intension"? I haven't seen this referred to in python and googling isn't turning out anything useful (also spelling it as "intention").
The worst for me is OOP. In french it's POO, which, for obvious reasons, does not inspire confidence in an english reader.
>>> class A(object):
...     def __getattribute__(self, attr):
...         if attr == "__add__":
...             return lambda *_: "hello world"
...         return None
...
>>> a = A()
>>> a.__add__(A())
'hello world'
>>> a + A()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for +: 'A' and 'A'
if (match := re.match(r1, s)):
    o = match.group(1)
    # plus some code here
elif (match := re.match(r2, s)):
    o = match.group(2)
    # plus some other code here
In this case only your first solution works, I think. Leaving aside that having those deeply nested ifs is incredibly ugly, I find it hard to accept that something which completely changes the possible structure of the code is just "syntactic sugar".

Take a more familiar example:
x, y = (y, x)
Let's pretend that this is "just sugar" for using a temporary variable. What would the desugared version look like? As a first guess, how about:
z = (y, x)
x = z[0]
y = z[1]
del z
This seems fine, but it's wrong. For example, it would break the following code (since `z` would get clobbered):
z = "hello world"
x, y = (y, x)
print(z)
A temporary variable would need to be "fresh" (i.e. not clobber any existing variable). As far as I'm aware, there's no syntax for that in Python. What we can do is create a fresh scope, so that the temporary variable would merely shadow an existing binding rather than overwrite it. We can do that with a lambda and the new `:=` syntax:
(lambda z: (x := z[0], y := z[1]))((y, x))
However, this alters the semantics because the stack will be different. For example, we might have a class which forbids some attribute from being altered:
class A(object):
    def __init__(self, x):
        super(A, self).__setattr__('x', x)
    def __setattr__(self, name, value):
        if name == "x":
            raise Exception("Don't override 'x'")
        return super(A, self).__setattr__(name, value)
This will raise an exception if we try to swap two attributes:
>>> a = A('foo')
>>> a.y = 'bar'
>>> print(repr({'x': a.x, 'y': a.y}))
{'y': 'bar', 'x': 'foo'}
>>> a.x, a.y = (a.y, a.x)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 6, in __setattr__
Exception: Don't override 'x'
If we replace this with the lambda version above, the exception will have a different stack trace, which we can catch and process in arbitrary ways. For example, maybe we know that the `foo` function will trigger these exceptions when given `A` objects, but it's a recoverable error. So we "ask for forgiveness instead of permission" by catching these exceptions somewhere, checking the stack trace to see whether the Nth stack frame is `foo`, and aborting if it isn't. If we "desugared" using the above lambda, the Nth stack frame of the exception would be a different function (`<lambda>` instead of `foo`), and hence such a program would abort.

On the one hand, that's a pretty crappy program. But on the other, it demonstrates that "use a temporary variable" is not "easy" in the general case (which is what language implementations must handle).
Are you maybe conflating blocks with chained iterator operations? Adding blocks to Python's current functional syntax would be pretty ugly. "foo.filter(bar).map(baz)" is nice even if you can only use named functions.
> You need to use a temporary variable but then your example is easy.
Yes, this example, of `if foo(x) and (x := bar()):`, would be easy with a temporary variable. But there are infinitely many variations we can make:
if foo(x) and (x := bar()):
if foo(x) or (x := bar()):
if (x := baz()) and foo(x) and (x := bar()):
if foo(x, y) and (x := bar()) and baz(x) and (y := quux()):
...
I fail to see how something is "just sugar" when desugaring it seems to require implementing a general-purpose compiler from "Python" to "Python without ':='".

But the overall question is: when is the sugar just syntactical, and at what point does it become a complete new taste?
I'm thinking it could look like this:
import numpy as np
def M1 %*% M2 with same precedence as *:
return M1.matmul(M2)
foo_matrix = np.matrix([[1,1],[1,1]])
bar_matrix = np.matrix([[2,2],[2,2]])
print(foo_matrix %*% bar_matrix)
Also, it would be nice to have a pipe operator `%>%` such that
foo %>% f()
is equivalent to f(foo)
The alternative is to make f a method of foo, and then you can write foo.f()
But what happens if I don't want f to be a method? I just want the style of writing the f after the foo, but I don't want the baggage of OOP. Is that anti-Pythonic?

I think anything general would probably be considered as cluttering up the grammar or non-pythonic. Completely custom unicode operators was something I loved about Swift (open to abuse, but really useful in moderation).
In toy examples, you can see all that because they'll show the declarations.
Here's a case[1] trying to explain monads by showing a simpler example:
parseP5_take2 s =
matchHeader (L8.pack "P5") s >>?
\s -> skipSpace ((), s) >>?
(getNat . snd) >>?
skipSpace >>?
\(width, s) -> getNat s >>?
skipSpace >>?
\(height, s) -> getNat s >>?
\(maxGrey, s) -> getBytes 1 s >>?
(getBytes (width * height) . snd) >>?
\(bitmap, s) -> Just (Greymap width height maxGrey bitmap, s)
Is all that left or right associative?

Generally, to read an expression when you have custom operators, you have to dig up the associativity and precedence. That's why I see many newer libraries avoiding custom operators, [2] vs [3].
I think you can do custom operators in a language, you just need to require that the associativity and precedence be declared in import. I'd also have named levels of associativity, so maybe:
import MyModule.Foo.Bar (op(!$!, additive, right), ...)
Now, that's some boilerplate in your imports, but for readability, it'd be huge. And automated tools can tweak stuff like this.

[1] http://book.realworldhaskell.org/read/monads.html
[2] http://hackage.haskell.org/package/pretty-1.1.3.6/docs/Text-...
[3] https://hackage.haskell.org/package/prettyprinter-1.2.1/docs...
# Share a subexpression between a comprehension filter clause and its output
filtered_data = [y for x in data if (y := f(x)) is not None]
What about:
filtered_data = [(y := f(x)) for x in data if y is not None]
Will that work also?

Attempted guess at an answer: no, because the filter clause of a comprehension is evaluated before the output expression, so `y` would be read before the output expression ever binds it. But if that's so, I'm a bit concerned that this will be confusing to users.
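A quick check of that guess (requires Python 3.8+; `data` and `f` are stand-ins invented for the demo). The reversed form fails because the filter clause is evaluated before the output expression that would bind the name:

```python
data = [1, -2, 3]
f = lambda x: x * 2 if x > 0 else None

# The PEP's form: the walrus in the filter binds y before the output uses it.
ok = [y for x in data if (y := f(x)) is not None]
assert ok == [2, 6]

# The reversed form: the filter reads z before anything has bound it.
try:
    [(z := f(x)) for x in data if z is not None]
    raised = False
except NameError:
    raised = True
assert raised
```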
'foo' if foo else 'bar'
"__iadd__" and "__add__" can do whatever they want.
x = open('filename')
x.closed  # False
with x:
    print(x.readline())
x.closed  # True
I think you're right. I prefer the `as` variant for readability.

Things have gotten much better, thankfully. Python 3 (and 2) have been tweaked over the years to make it much easier to write code that will work on both Python 2 and 3, and to make it easier to tweak existing code so that it will work on both. As a result, it's possible to transition code a file at a time, or even a portion at a time, instead of the impossible "all at once" transition. Almost no one used Python 3 after it was released, or for a number of years later. Now that the developers of Python have started to make transition practical, people have started transitioning.
Still, it takes real effort to transition to Python 3, and many people have "real work" to do instead of transitioning language versions without any real benefit. "3 is larger than 2" is not a real benefit. A real benefit is something like "this transition will radically increase performance" - and no one is claiming that Python 3 has a real-world advantage like that over Python 2. "Python 2 will eventually be unsupported" is a problem, but no one is providing free money to do the transition, so for many people it's just a fact.
Historically Python has been very good about supporting backwards compatibility and smooth upgrades. I hope that the Python 2->3 transition was an anomaly. You can make changes to languages, but you have to make it easy for users to do the transition.
node = node.next = red_node
If assignments were right-to-left, the .next attribute would be set on the blue node. If they were left-to-right, it would be set on the red node.
The AttributeError exception shows it is the latter order.
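A runnable sketch of that left-to-right order (same class shape as the snippet above):

```python
class Node:
    pass

blue_node = Node()
red_node = Node()

node = blue_node
# Chained assignment evaluates the RHS once, then assigns the targets left
# to right: first `node = red_node`, then `node.next = red_node` -- but by
# that point `node` already refers to the red node.
node = node.next = red_node

assert node is red_node
assert red_node.next is red_node        # .next landed on the red node
assert not hasattr(blue_node, "next")   # the blue node was never touched
```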
Had Scheme been Netscape's scripting language instead of Javascript, then I could easily see many of the less dedicated developers and hobbyists getting frustrated at S-expressions and the like. I mean, I love functional programming, but even I cannot deny that the learning curve is steeper and S-expressions are less readable (at least to an untrained eye) than Javascript is.
So my point was, if Javascript didn't exist then I suspect there would be enough demand to either dumb down / bastardise Scheme, or implement another scripting language which was more hobbyist friendly (also hence the VBScript quip).
These examples in the PEP, they all seem bad to me, written to be too clever by someone who doesn't think well.
E.g.:
filtered_data = [y for x in data if (y := f(x)) is not None]
How about: filtered_data = [y for y in (f(x) for x in data) if y is not None]
Or just? filtered_data = filter(None, map(f, data))
(If f() can return non-None "Falsey" values then this would require a predicate function. But maybe f() is badly designed?)

Or this:
if any(len(longline := line) >= 100 for line in lines):
    print("Extremely long line:", longline)
What's wrong with the "old-fashioned" way?
for line in lines:
    if len(line) >= 100:
        print("Extremely long line:", line)
        break
Of course, in the bad old days when loop variables leaked, I think you could just write:
if any(len(line) >= 100 for line in lines):
    print("Extremely long line:", line)
But I'm not sure, and in any event leaking loop vars was fixed at some point. (Good. It was useful but bug-prone. Just like this PEP!)

To me this is also a mess:
results = [(x, y, x/y) for x in input_data if (y := f(x)) > 0]
It should be: results = [(x, y, x / y) for x, y in zip(input_data, map(f, input_data)) if y > 0]
Maybe all the examples are just poorly chosen, but I don't believe that. I think this PEP is just a bad idea. Badly bad.

There seem to be new semantics in the interaction with comprehensions, which is one of the main sources of controversy in the discussions linked as the OP.
def __enter__(self):
    return self
which is a common pattern (used by open()), but there's no requirement that __enter__() return the same object.

In cases where __enter__() does something different, the assignment expression and the 'as' variable would have different values (the object of the 'with' statement, x, and the result of calling x.__enter__(), respectively).
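A hedged sketch of that difference (the `Wrapper` class is made up for illustration; requires Python 3.8+): `:=` captures the context manager itself, while `as` captures whatever `__enter__()` returns.

```python
class Wrapper:
    """A context manager whose __enter__ returns a different object."""
    def __enter__(self):
        return {"inner": True}   # deliberately not self
    def __exit__(self, *exc):
        return False

with (cm := Wrapper()) as entered:
    assert cm is not entered           # := captured the manager itself
    assert isinstance(cm, Wrapper)
    assert entered == {"inner": True}  # 'as' captured __enter__'s result
```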
I very frequently use something like the following:
[node.find(SOME_XPATH).get("value") for node in tree if node.find(SOME_XPATH) is not None]
Which I can soon rewrite as: [found_node.get("value") for node in tree if (found_node := node.find(SOME_XPATH)) is not None]
There's a certain amount of complexity introduced, but I think removing the duplication makes up for it. This is one of the few remaining cases in Python where I feel like there's not a simple way to avoid repeating myself.
if reductor := dispatch_table.get(cls):
    rv = reductor(x)
elif reductor := getattr(x, "__reduce_ex__", None):
    rv = reductor(4)
elif reductor := getattr(x, "__reduce__", None):
    rv = reductor()
else:
    raise Error("un(shallow)copyable object of type %s" % cls)
especially when you compare it to the existing implementation:
reductor = dispatch_table.get(cls)
if reductor:
    rv = reductor(x)
else:
    reductor = getattr(x, "__reduce_ex__", None)
    if reductor:
        rv = reductor(4)
    else:
        reductor = getattr(x, "__reduce__", None)
        if reductor:
            rv = reductor()
        else:
            raise Error("un(shallow)copyable object of type %s" % cls)
And I think you are leaving out functional languages which share Python's readability, if not surpass it, while remaining much more expressive: that's F# and OCaml.
F#, in my opinion, is superior in every way to Python and subsumes Python's strengths of readability, ease, OOP, and scripting, while greatly raising the ceiling of possibility. It's criminally underused, especially in areas where Python has been chosen.
And I disagree that Lisp is harder to get into. Racket is just as easy to learn as Python, if not easier, due to its regularity. The How to Code / Systematic Program Design course on edX and the book How to Design Programs showcase this.
foo%$%bar %>%>% f()
is equivalent to (f(foo(bar)), f(bar(foo)))
I'm just kidding, but please no more cryptic symbols in Python's syntax, if it can be solved with functions instead. We have other languages to scratch that itch.

I haven't read the PEP, so I don't know the trade-offs, but I'm not loving this (:=) syntactic sugar either.
> preferably only one obvious way of doing it
I don't think that is true at all. Python allows you to perform many tricks, with overloading operators etc.
Operator overloading in general only seems to be practical for mathematical types like sets or sequences where the rules are well-defined thanks to generations of people thinking about them. Yet, even the set case works poorly for C++ because in addition to associativity new operators also inherit the built-in operator precedence rules. For example should * take precedence over + in set operations? (Assuming you implement * for cartesian product and + for union.)
Maybe C++ has changed since I used it but this sort of thing really gets in the way of writing correct code.
For more advanced list comprehensions, even JavaScript (ES6+) is more readable.
Python is not really that kind of language.
tree.map(node => node.find(SOME_XPATH)).filter(Boolean).map(node => node.get("value"))
I can deal with either language at this level of complexity. Anything more complicated needs more LoC in either language.
There's nothing technically compelling in Python 3. It's all developer candy.
Python 2 is going to remain stable for years, even decades. It's FORTRAN. Development on the C interpreter will stall, but what that really means is that it will become asymptotically bug-free. If no one is touching the code except to apply bug-fixes there are no sources of new bugs (okay bug-fixes will introduce bugs themselves, but the net trend will be negative.)
Even if you have to use e.g. Tauthon (a Python 2 that brings in a few select features from 3) you can still expect a more stable environment than Python 3 development.
And there are a lot of Python 2 runtimes and other places where Python 2 syntax is still welcome: Stackless, Pypy, Cython, Nuitka, MicroPython, Brython, etc... Python 2 hasn't gone anywhere.
Frankly, I think it's a damn shame IMHO that the Python 3 interpreter can't run Python 2 code! Not even with a command line switch. It should be able to switch syntax per module/file at least.
Anyway, to repeat, to me Python 3 is full of dubious toys for developers but has no compelling business value to drive adoption. It's cannibalizing the Python 2 community and ecosystem, but without that nobody would use it. It's not better than 2, just different.
If you were to start a new project and had to decide on a language, you would be better off (IMHO) with Rust or Go (or Python 2) than Python 3, generally speaking.
Let me turn the question on its head: Python 3 adopters, can you give me a reason, a technical reason, why I should choose P3 over P2 for any project?
I'll throw out some things right off:
Performance. No. If you're using Python and worried about performance you've already committed stupidity and I'm not interested. Cython is your friend.
Async. No. I write Twisted code. Async has been done. Adding keywords to the language is stupid. (As an aside, I know Twisted is hard to use, and the docs sucked until recently, but it is a damn fine code base, written by really good and committed people, and it's an unsung under-appreciated treasure chest! Any Python networking code that does anything serious and doesn't use Twisted is probably doing too much work, poorly. The last thing the language needed was "async" as a keyword. "Foooooootguuuuuuun Jenkins!")
F-strings. No. More stupid and there are dozens of libs that do it.
Support? No. I can maintain the P2 code on my own if need be, but I don't need to because there are others, e.g. Tauthon, etc.. and I can support them. Plus as I said above, with no innovation comes no new bugs!
Type hints et. al. Ummm..., No. I was really stoked about this, but the first time I tried to use it (MyPy) I immediately broke it. I was doing things with the default keyword argument dict that just couldn't be represented in the type language. Valid useful Python code that the types can't handle is a thing. I like strong types, but Python has never benefited from over-nicety in typing. Quack, quack. Put another way, the coercive-type subset of Python is great but overly restrictive. (Which is not inherently bad, I think Elm-lang is brilliant!)
I love Python 2 because it has such a clean syntax, it deals sensibly with all the little fiddly bits that e.g. C makes you think about, and its semantics are powerful and flexible but elegant enough that you can hold the model (or a reasonable homologue) in your head.
From my POV Python 3 just messes up 2. I see no compelling technical reasons to use 3.
That being said, mypy supports type hints in Python 2 code, so that really just adds to your argument.
def f(F, *args):
    f.reductor = F(*args)
    return bool(f.reductor)

if f(dispatch_table.get, cls):
    rv = f.reductor(x)
elif f(getattr, x, "__reduce_ex__", None):
    rv = f.reductor(4)
elif f(getattr, x, "__reduce__", None):
    rv = f.reductor()
else:
    raise Error("un(shallow)copyable object of type %s" % cls)
Same pattern works for re.match objects.

Once you abstract the patterns you can drive towards data-driven code:
K = (
    (dispatch_table.get, (cls,), (x,)),
    (getattr, (x, "__reduce_ex__", None), (4,)),
    (getattr, (x, "__reduce__", None), ()),
)
for F, a, b in K:
    reductor = F(*a)
    if reductor:
        rv = reductor(*b)
        break
else:
    raise Error("un(shallow)copyable object of type %s" % cls)
Syntax is the enemy; never substitute syntax for thought.

I would suggest that if you can express the exact same semantics with a "few" more lines then it's just sugar.
In the case of x := y, it's always possible to rewrite the program with a "few" extra lines where it means the same thing. It's just combining the assignment and expose operations.
I made a flake8 plugin to forbid assignment expressions:
https://github.com/Korijn/flake8-assignexp
Will be released once python 3.8 is out and I can test it.
Unless you can provide an example where that isn't true, it's just sugar, i.e. unneeded, but maybe desired, syntax.
[n.get("value") for n in filter(None, (node.find(SOME_XPATH) for node in tree))]
Or just:
F0 = lambda node: node.find(SOME_XPATH)
F1 = lambda node: node.get("value")
result = map(F1, filter(None, map(F0, tree)))
def odd_range(count):
    return (x for x in range(count) if x%2)

for n in odd_range(100):
    ...
As for the second one, I'm just not too happy with the implied two loops (even if it amounts to only one in practice).

I was doing something that was totally "plain vanilla" in the Python semantics but outside the assumptions of the type checker.[1] It made me realize that "strictly-typed" Python is a subset of actual Python. It's not that "strictly-typed" Python isn't a useful language, it's that if I really wanted strict typing I'd use e.g. Haskell or Elm, where you get so much more bang for the buck it's not even funny.
"Duck-typing" is a trade-off. In that sense, strict typing actually kinda weakens Python 3, because you're losing [the full advantage of] duck-typing but not gaining [the full advantage of] strictly-enforced type-checking and type inference. And, as you point out, the lesser advantages of type hints are available to Python 2...
[1] I was experimenting with a purely functional web template system and I was using the keyword args dict as a sort of "grab bag" of values that functions could pass around, picking out the values they were "interested in" by named keyword parameters. It worked fine but there was no way to tell MyPy what the type of the dict should be.
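For illustration, a minimal sketch of that "grab bag" pattern (all names here are invented): perfectly valid, working Python whose kwargs dict has no obvious precise static type:

```python
# Each function picks the named values it's "interested in" out of the
# keyword-argument grab bag and ignores (or forwards) the rest.
def render_title(title="untitled", **rest):
    return "<h1>%s</h1>" % title

def render_page(**context):
    # Pass the whole grab bag along; render_title picks out what it needs.
    return render_title(**context)

result = render_page(title="Home", user="alice", theme="dark")
assert result == "<h1>Home</h1>"
```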
All this for the small cost of learning a few simple scoping rules and the negligible cost of occasionally discovering the special cases in which the new syntax cannot be used.
The first thing I checked is your very vocal assurance that F# is a better scripting language than Python. That seemed very weird to me; after all, that's Python's strong point. Since I script a lot, I looked for the most popular F# lib to parse script arguments.
Argu seems the winner, according to http://fsharpworks.com/survey.html. Their tutorial is pretty good (https://fsprojects.github.io/Argu/tutorial.html), and here is their hello world: 24 lines, packing dense symbology and using a lot of the language's specific features:
open Argu

type CLIArguments =
    | Working_Directory of path:string
    | Listener of host:string * port:int
    | Data of base64:byte[]
    | Port of tcp_port:int
    | Log_Level of level:int
    | Detach
with
    interface IArgParserTemplate with
        member s.Usage =
            match s with
            | Working_Directory _ -> "specify a working directory."
            | Listener _ -> "specify a listener (hostname : port)."
            | Data _ -> "binary data in base64 encoding."
            | Port _ -> "specify a primary port."
            | Log_Level _ -> "set the log level."
            | Detach _ -> "detach daemon from console."

let parser = ArgumentParser.Create<CLIArguments>(programName = "gadget.exe")
let results = parser.Parse [| "--detach" ; "--listener" ; "localhost" ; "8080" |]
printfn "%A" results.GetAllResults();;
The same thing with click, Python's most popular solution, is 11 lines, and it's shaped around almost only regular calls and parameters:
import click as cli, base64, urllib.parse as url
@cli.command("gadget.exe")
@cli.option('--working-directory', help='specify a working directory.', type=cli.File('rb'))
@cli.option('--listener', help="specify a listener (hostname : port)", type=url.urlparse)
@cli.option('--data', help='binary data in base64 encoding.', type=base64.b64decode)
@cli.option('--port', help='specify a primary port.', type=int)
@cli.option('--log-level', help='set the log level.', type=int)
@cli.option('--detach', is_flag=True, help='detach daemon from console')
def hello(**kwargs):
    print(kwargs)
hello(["--detach", "--listener", "localhost:8080"])
I have a hard time finding the motivation to look for the truth behind your other arguments after that.

So, basically, they have a proposal for an expression that explicitly says "I want to leak this variable into the enclosing scope" and it is less useful than the implicit old way?
Python's variable scoping rules are a mess anyway -- some are function level, some are block level and I usually can't tell which is which until the compiler complains...
for odd_numbers in [n for n in range(100) if n%2]:
after a quick "stupid python" comment.
I use Python the most so maybe I'm just used to it but I always found scoping pretty straightforward.
Do you have an example that you found surprising?
You can absolutely define a pipe operator to apply functions. Something like
from typing import Callable

class Pipeable(object):
    def __init__(self, f: Callable):
        self.f = f
    def __or__(self, other: Callable):
        # Compose lazily: the original `other(self.f)` called `other`
        # immediately on the function object instead of piping values.
        return Pipeable(lambda *args, **kwargs: other(self.f(*args, **kwargs)))
Then use it like:
composed = Pipeable(foo) | f
Replace `|` with `>>` if you feel that conveys what you want better. Libraries already exist which provide this interface.

> Is that anti-Pythonic?
In a sense, yes. Operators and methods are defined in terms of objects, not in terms of abstract types.
I don't know what you would lose by having if as an expression. It is easy to notice when it is used in expression context, and there is no extra computation that needs to be done.
It was sort of addressed with the ternary operator, but that quickly becomes ugly.
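The "quickly becomes ugly" point is easy to see once conditional expressions nest (a toy example):

```python
def sign(n):
    # Nested conditional expressions: legal, but readability degrades fast.
    return "zero" if n == 0 else ("positive" if n > 0 else "negative")

assert sign(0) == "zero"
assert sign(5) == "positive"
assert sign(-3) == "negative"
```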
map & filter work great when one can chain methods (no matter if via methods or %>% pipes like in R) and with decent lambda functions.
If a thing is uglier than in both JS and R... well, it is not a good sign.
if '__add__' in type(a).__dict__:
    result = a.__add__(b)
    # NotImplemented is a returned sentinel, not an exception:
    if result is not NotImplemented:
        return result
if '__radd__' in type(b).__dict__:
    result = b.__radd__(a)
    if result is not NotImplemented:
        return result
raise TypeError("unsupported operand type(s) for +: '{}' and '{}'".format(type(a).__name__, type(b).__name__))
In particular, the runtime looks for '__add__' in the type's __dict__ (bypassing instance attributes and `__getattribute__`), rather than just trying to invoke `__add__` on the object, so your `__getattribute__` method isn't enough to make it work. If you add an actual `__add__` method to A, your example will work.

Yes, and in situations where I can't do that, I use only the features available in both versions. Currently most of the code I write is python2.7- and python3.6-compatible. It's not that difficult to do so.
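A small runnable demonstration of that type-level lookup (CPython semantics):

```python
class A:
    pass

a = A()
a.__add__ = lambda other: "hello world"   # instance attribute, not a type slot

# Calling it directly works, because normal attribute lookup finds it:
assert a.__add__(A()) == "hello world"

# But the + operator bypasses the instance and looks on type(a):
try:
    a + A()
    raised = False
except TypeError:
    raised = True
assert raised

# Defining __add__ on the class itself makes the operator work:
A.__add__ = lambda self, other: "hello world"
assert a + A() == "hello world"
```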
What are your objections? What is the dense symbology?
Discriminated unions (which is what CLIArguments is) are very simple to define and understand. The Usage member uses nearly the simplest pattern matching available. Pattern matching is a staple of functional languages; it's a case statement in its simplest use, but is so much more in general.
These two things are the bread and butter of F#. They may take a modicum more initial effort than simple function calls, but it pays off in readability and expandability. It seems Python takes the easy route: it makes things apparently simple at first but difficult in the long run.
I know both languages, to a degree, and find the Python hard to read. It's also faking types, which is kind of funny; the F# code is fully typed.
Lines of code is meaningless to me here because the F# has better delineation of concepts.
And lastly, there's actually no reason why you couldn't write an F# library to behave like the Python one here. That is not true the other way around; that's the power of F#'s multi-paradigm nature.
Perhaps not yet, but soon enough, sure.
The year is 2025. You want to set up a webserver to do something in python2. The Letsencrypt api doesn't have official python2 bindings, SQLAlchemy hasn't provided any database interfaces for python2 in 6 years, and so you can't use recent versions of postgres without writing your own dbapi.
There's been a couple of major security bugs found in various system tools, and sane flags and defaults have changed as a result, but python2's standard library never updated to call them safely, so you're default insecure. And libraries like smtp and BaseHTTPServer don't support modern standards like HTTP3 and extended MIME types.
You're banking on all that maintenance being easier than porting to python3.7 or whatever (which in my experience is actually very easy once you know what you're doing, you can port a decently sized project in a day or so).
At runtime, your second example is actually significantly less explicit because so much is hidden away in mutable state.
found_nodes = (node.find(SOME_XPATH) for node in tree)
[found_node.get("value") for found_node in found_nodes
 if found_node is not None]
or as a one-liner:
[found_node.get("value") for found_node in (
    node.find(SOME_XPATH) for node in tree) if found_node is not None]
It's 15 characters longer than your := version, at 110 characters instead of 95.

I agree. The important question is what we mean by "the exact same semantics". I would say that observational equivalence is the most appropriate; i.e. that no other code can tell that there's a difference (without performing unpredictable side-effects like parsing the contents of the source file). Python is a really difficult language for this, since it provides so many hooks for redefining behaviour. For example in many languages we could say that 'x + x' and 'x * 2' and 'x << 1' are semantically the same (they double 'x'), but in Python those are very different expressions, which can each invoke distinct, arbitrary code (a `__mul__` method, an `__add__` method, etc.). The fact they often do the same thing is purely a coincidence (engineered by developers who wish to remain sane).
It's fine if we only care about the 'black box' input/output behaviour, but at that point it no longer matters which language we're using; we could have something more akin to a compiler rather than desugaring into expressions from the same language.
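To make that concrete, a toy class (invented here) where the three "doubling" expressions dispatch to entirely different methods:

```python
class Weird:
    def __add__(self, other):
        return "via __add__"
    def __mul__(self, factor):
        return "via __mul__"
    def __lshift__(self, bits):
        return "via __lshift__"

w = Weird()
# The "same" doubling operation, three different code paths:
assert w + w == "via __add__"
assert w * 2 == "via __mul__"
assert w << 1 == "via __lshift__"
```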
> it's always possible to rewrite the program
There's an important distinction here too. Are we saying that "a semantically equivalent program exists"? That's a trivial consequence of Turing completeness (e.g. there's always an equivalent turing machine; and an equivalent lambda calculus expression; and an equivalent Java program; etc.)
Are we saying that an algorithm exists to perform this rewriting? That would be more useful, since it tells us that Rice's theorem doesn't apply for this case (otherwise it might be impossible to tell if two programs are equivalent or not, due to the halting problem).
Are we saying that we know an algorithm which will perform this rewriting? This is the only answer which lets us actually run something (whether we call that an "elaborator", a "compiler", etc.). Yet in this case I don't know of any algorithm which is capable of rewriting Python involving `:=` into Python which avoids it. I think such an algorithm might exist, but I wouldn't be surprised if Python's dynamic 'hooks' actually make such rewriting impossible in general.
I certainly don't think that a local rewrite is possible, i.e. where we can swap out any expression of the form `x := y` without changing any other code, and keep the same semantics. If it is possible, I would say that such a local, observational equivalence preserving rewrite rule would qualify for the name "syntactic sugar".
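One concrete obstacle to a purely local rewrite: under PEP 572, a `:=` inside a comprehension binds in the enclosing function scope, so any expression-for-expression substitution would have to reproduce that side effect too. A sketch (Python 3.8+; `f`, `data`, and the other names here are hypothetical, not from the thread):

```python
# Sketch: the walrus operator inside a comprehension assigns in the
# *enclosing* scope, which a naive local rewrite would not preserve.
def f(x):
    return x * 2

def with_walrus(data):
    result = [y for x in data if (y := f(x)) > 2]
    return result, y  # y is visible here, outside the comprehension

print(with_walrus([1, 2, 3]))  # ([4, 6], 6)
```

Any desugaring of the comprehension alone would lose the final binding of `y`, so the rewrite cannot be confined to the `x := y` expression itself.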
> It's just combining the assignment and expose operations.
I'm not sure what you mean by "expose", and a search for "python expose" didn't come up with anything. It would be nice to know if I've missed out on some Python functionality!
What makes you say that? I would say it's crucial. Syntactic sugar is anything where we can say "Code of the form 'foo x y z...' is defined as 'bar x y z...'" where both forms are valid in the same language. Such a definition, by its very nature, gives us an automatic translation (look for anything of the first form, replace it with the second).
> It just means that in all cases a human can rewrite it without the new syntax and get the same semantics.
Yet that's so general as to be worthless. I'm a human and I've rewritten Java programs in PHP, but that doesn't make Java "syntactic sugar" for PHP.
I'm reminded of PHP, where (at least in version 5.*) we could write:
$myObject->foo = function() { return "hello world"; };
$x = $myObject->foo;
$x(); // Hello world
$myObject->foo(); // Error: no such method 'foo'
(Taken from an old comment https://news.ycombinator.com/item?id=8119419 )

How about integer arithmetic? That's the programming language Goedel used for his incompleteness theorems (specifically, he showed that the semantics of any formal logical system can be implemented in Peano arithmetic, using Goedel numbering as an example).
I wouldn't call that a useful definition though. There are reasons why we don't treat RAM as one giant binary integer.
Am I the only person still using Python 2 and LetsEncrypt? SQLAlchemy? Postgres? I try to keep to a minimal dependency graph. Even so, writing "glue code" in Python 2 will still be easy, and API description languages like e.g. Swagger will only become more common, so I'm not worried about being left in an incompatibility cul-de-sac.
(And if I am the only person still supporting Python 2 in seven years I'm probably raking in dollars in legacy support contracts, no? You know COBOL programmers make bank, eh?)
> There's been a couple of major security bugs found in various system tools, and sane flags and defaults have changed as a result, but python2's standard library never updated to call them safely, so you're default insecure.
First, do you have links or other references? I'd really like to read up on those.
Second, yes, the standard library needs an overhaul (and the docs are soooo bad to this day.) You got me there. One of my dreams has been to start a "Python Unstandard Library" (sic) and comb through all the existing Python 2 libs and make a really clean and well-documented, uh, library. That's probably not going to happen. That said, back-porting bugs and security fixes from Python 3 is something that still has to be done and I'm one of the people who intends to do it.
So yeah, more free other people's work is one reason to prefer 3 to 2 (if it turns out that way. As in a lot of crucial open source projects, the Python core maintainers are not a large nor well-paid group...)
> And libraries like smtp and BaseHTTPServer don't support modern standards like HTTP3 and extended MIME types.
Networking in Python to me means Twisted. Now if they abandon Python 2 I might have a problem, however, I wouldn't write a webserver in Python for love or money. Python (neither 2 nor 3) is not a great language for high-performance backend servers. (I know that doesn't stop people from doing it.) I would probably use Go. Or write your prototype in Python and then rewrite in Java or C++ if you get funded.
And, to repeat my earlier argument, am I the only person using Python 2 in 2025 who has to use HTTP3?
> You're banking on all that maintenance being easier than porting to python3.7 or whatever
Yep, I sure am. I'm not the only person who feels this way. I'm banking that enough momentum remains in the Python 2 ecosystem and community to keep supporting it for at least twenty to thirty years (if there's nothing better before that I'm going to give up computers.) Heck, we're probably going to see 2.4 in the wild til forever too.
People are trying to kill Python 2 and it's not working. Adding in B.S. like PEP 572 only makes it harder to kill.
That's your aesthetic opinion. Mine happens to be the opposite: I find that code (the second Python snippet) to be elegant and readable.
> (Compare and contrast it with JavaScript; someone posted it.)
let result = tree.map(node => node.find(SOME_XPATH)).filter(Boolean).map(node => node.get("value"));
F0 = lambda node: node.find(SOME_XPATH)
F1 = lambda node: node.get("value")
result = map(F1, filter(None, map(F0, tree)))
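To make the snippet above runnable in isolation, here is a minimal sketch with a stand-in Node class; `Node`, `SOME_XPATH`, `find`, and `get` are all hypothetical mocks of the XML API being discussed, not a real library:

```python
# Hypothetical stand-in for the XML-node pipeline under discussion.
class Node:
    def __init__(self, value=None):
        self.value = value

    def find(self, xpath):
        # pretend the xpath query matches only when a value is present
        return self if self.value is not None else None

    def get(self, key):
        return self.value

SOME_XPATH = ".//value"
tree = [Node("a"), Node(), Node("b")]

F0 = lambda node: node.find(SOME_XPATH)
F1 = lambda node: node.get("value")
# filter(None, ...) drops the None results from failed finds
result = list(map(F1, filter(None, map(F0, tree))))
print(result)  # ['a', 'b']
```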
Yeah, the lambdas look like lambdas; the pattern is map.filter.map; all thoughts are tidy; F0 and F1 are generic in the node type and reusable, they look like the start of a simple combinator library for nodes. All in all, I like the Python code.

Even if you do this:

let F0 = node => node.find(SOME_XPATH);
let F1 = node => node.get("value");
result = tree.map(F0).filter(Boolean).map(F1);

(Is it "let" or "var" these days?) I would say that, although the syntax seems cooler, the Python code is more conceptually elegant because map() and filter() aren't methods of a tree class.

But the real gem would be Joy code:
F == [SOME_XPATH find] map [bool] filter ["value" get] map
Joy is the best language. A Joy compiler would be able to "open up" map and filter and write a version of F that did all the work in a single iteration. I'm actually going to be switching from Python 2 to Joy rather than Python 3, FWIW.

When the alternative is to use officially supported versions with minimal work, yes. You're likely one of the few people using them, or you're paying someone to do it for you. Like if you're willing to pay someone for support then that's fine, but it's not like this is going to be free.
>First, do you have links or other references? I'd really like to read up on those.
We're talking about a future. But python seems to average ~2 CVEs per year, which is a great track record, but means you'll have 10 unpatched vulnerabilities by 2025.
>Networking in Python to me means Twisted.
Ok, that's fine. Do you think the twisted devs are going to support a deprecated python version for 5 years? Especially since the majority of their work recently has been to fully support python3? The writing is on the wall.
>Heck, we're probably going to see 2.4 in the wild til forever too.
Where do you see any open source projects that are compatible with python2.4? (it didn't support `with` or conditional expressions). Sure there's code still running that's written for python 2.4, but its code that was written at $SOMECO 15 years ago and hasn't been touched since. Sure that exists, but that's not a particularly enviable situation.
>am I the only person using Python 2 in 2025 who has to use HTTP3
I mean probably. That's the entire point of using modern, supported tools. They get support and keep up to date with modern technology. There aren't a whole lot of Java 5 compatible libraries that support HTTP2. That's the price you pay for using unsupported legacy software: not being able to use modern technology.
>Adding in B.S. like PEP 572 only makes it harder to kill.
Ah yes, entirely backwards compatible changes truly make it impossible to upgrade! I'm not particularly a fan of PEP 572 (in fact I actively dislike it), but this is just nonsense.
tree
.map(node => node.find(SOME_XPATH))
.filter(Boolean)
.map(node => node.get("value"))
I don't use Python on a daily basis, but I find that infinitely more readable than the comprehension version (plus, it's a syntax used in several major languages).

I like C's way of making assignment an expression. I think having a separate assignment statement and then an assignment expression is a mess. It's still useful though, as Python was missing a `where`-keyword-like feature from Haskell, which is necessary to avoid duplicating computation in list comprehensions.
Someone showed me this in a previous thread:
>> [bar(foo_z) for z in stuff for foo_z in [foo(z)] if foo_z]
Not a common idiom, though. Neither snippet is perfectly clear to me, but yours is closer.
This is absolutely dead on accurate. As a Clojure developer, using one of the most expressive -- dare I say, artistic -- programming languages ever created, I can say that I am totally in the zone writing code which is elegant and terse and really packs a punch, does clever things.... and then just a few days later, it is very hard for my own brain to parse my own code and figure out what it does.
For each line of code you write once, it will be read dozens of times by you or others. Code is for reading. Languages that get this right make things a lot easier for everyone.
I started to agree with that but then I realized: I don't think you can get getattr to raise an error here at all. The keys are strings, the default value is provided, I don't think there's a fault path here.
In general though, I totally agree with you. I use less-cute code shamelessly when it will aid debugging. I write for the debugger. ;-)
> At runtime, your second example is actually significantly less explicit because so much is hidden away in mutable state.
Yeah, I see what you mean. You don't know which "F" or "reductor" triggered the error because it doesn't show up in the traceback because each assignment doesn't have its own line in the code. That's a good point. I've been there, and it sucks.
In this specific case I would actually most likely write out the three if.. else.. chain. If there were four or more I would use the gadget style. It's right on the borderline, and I would suspect that there won't be a fourth option in the future (although usually that would be a bad assumption; in this case I speculate the the object model isn't going to change any time soon.)
while (bytes := io.get(x)):

Would currently be written:

bytes = io.get(x)
while bytes:

And likewise:

[bar(x) for z in stuff if (x := foo(z))]

is equivalently:

[bar(foo(z)) for z in stuff if foo(z)]

Perhaps this is just my personal opinion but I don't really think the ":=" (or "as" for that matter) adds much in the way of clarity or functionality. I guess at the end of the day I am neutral about this addition...but if there isn't a clear upside I usually think it's better to have less rather than add more.

while True:
    bytes = io.get(x)
    if not bytes:
        break
    ...

which I think is objectively less readable.

In the second example, you have an extra call to foo for every element of stuff. If foo(z) is expensive, you'd probably want to write this as
[bar(x) for x in map(foo, stuff) if x]
instead - which I personally don't mind, but it's arguably not as clear as having the in-line assignment expression.

bytes = next()
while bytes:
    # do something ...
    bytes = next()

It's possible to use while True and break, but that isn't very elegant.

In my opinion Python's explicit self argument is somehow a cleaner approach than having distinct block and function/method types. You still need some kind of ugliness in order to implement super(), but for Python 3 that happens at compile time and the resulting syntax is reasonably sane.
As for the aforementioned method context issue CLOS/MOP takes interesting approach of macroexpanding the method definition into something like
(lambda (args next-method)
  (labels ((call-next-method (&rest args)
             ...
             (funcall next-method ...))
           (impl (...)
             ... method body ...))
    (funcall impl args)))
Also of note is that in ST, there are no control structures; even if is implemented as a method on Boolean instances which takes a block argument, with true and false being instances of True resp. False with different implementations of the #ifTrue: method.

The only dogmatic position here is the Python one, which is based on opinion alone and leaves a lot of good structures out for no gain at all.
https://docs.python.org/3/reference/datamodel.html#special-l...
Apparently the runtime is even more picky than I showed. The method has to be defined on the object's type, not in the object's instance dictionary. So, really the lookup is something like:
if hasattr(type(a), '__add__'):
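The type-level lookup described here is easy to check; a short sketch (class and attribute names are made up for the demonstration):

```python
# Special methods are looked up on the type, not the instance:
# the + operator ignores an __add__ stored in the instance dict.
class A:
    pass

a = A()
a.__add__ = lambda other: "from instance"  # never consulted by +

try:
    a + a
except TypeError:
    print("instance attribute bypassed")

A.__add__ = lambda self, other: "from class"
print(a + a)  # from class
```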
The link I provided explains the rationale for bypassing the instance dictionary and `__getattribute__` method.

Python's indentation system was so nice for imperative programs. But it gets in the way when functional features are retrofitted. The "everything must be one big expression" style is a total mismatch to Python. The beauty is gone.
+ Just because something isnt "necessary" it doesn't mean it doesn't add value. The problem is just sites that make JS a requirement rather than an optional feature enhancement.
+ Youre talking about stuff from a too recent perspective. Eg Before CSS came into its own, JS was the only reliable way to do mouse over effects (which can add a lot to usability even on regular web pages)
+ Just because JS is abused on current news sites, blogs and other sites that are really just static pages, it doesn't mean that Scheme wouldn't have been abused in the same way.
+ You also wouldn't see fewer developers writing frontend code. They would just use a transpiler (like we see with TypeScript et al) except instead of compiling from a stricter language (in TypeScripts case) it would transpile from a lazier language into a stricter one.
+ Or instead of the previous point (though more likely as well as) you'd still have a non-scheme language in the browser. Possibly even VBScript. Or maybe something derived from Perl. But I guess at least we wouldn't have a language monopoly on browser scripting languages.
Honestly though, I hate JavaScript just as much as you do. But let's not get carried away with our exaggerations :)
[
found_node.get("value")
for node in tree
if (found_node := node.find(SOME_XPATH)) is not None
]
I think I also prefer Ruby/Javascript style comprehensions, but it's worth comparing well formatted code.

If Python's gonna have breaking syntax, why not work on bringing it more in line with other modern languages that don't require special breakout syntax for expressions and rely more on functional features?
Are we still maintaining that lambdas are hard but suggesting expression-scoped variables are easy?
Well, if you turn if into an expression the way that you indicated then now you will also need the equivalent of the “begin” expression in order to be able to have multiple statements and/or expressions in either branch.
So then you are breaking backwards compatibility. Which makes it a non-starter from the get go.
And like I said there is also the fact that functions need the return keyword in Python if you want to return a value.
The alternative `f(x) as y` syntax looked nice to my eyes, and doesn't introduce new symbols. However I'm sure they are right, that it would have bad corner cases.
That being said, it's another example of how after many years it turns out that C got it right all along. Assignment being an expression (not a statement) makes many useful idioms possible. Sure, there was a problem with typos concerning '=' and '==' being similar, but this was solved by the convention of requiring parentheses around an assignment if its value is used as a boolean. If you turn on warnings in a modern C compiler you will never make this error again. Interestingly, it's also what Python has chosen even though they went with the := operator.
Personally I prefer having one assignment operator and assignment being an expression. If it's := or = really doesn't matter in my view.
Of course you can ;) __getattr__ could invoke a network request that fails for all you know.
But yes this was more a general comment on this layout. The data-driven layout, while very explicit in code, is actually awful when you encounter issues at runtime, for exactly the reasons you describe.
While I'm not a fan of this PEP's syntax, I will say that I do think it somewhat helps here: it reduces the boilerplate from 3-4 lines in some cases to a single line, which in practice makes this more visually declarative, and keeps tracebacks sane.
That said this pattern is rare enough for me that I don't think I'll be using this tool anytime soon.
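The `__getattr__` failure mode mentioned above is worth pinning down: `getattr` with a default only swallows AttributeError, so any other exception raised inside a `__getattr__` hook propagates. A sketch (the `Flaky` class is hypothetical):

```python
# getattr's default argument catches AttributeError only; a __getattr__
# that fails for some other reason (e.g. a network error) still raises.
class Flaky:
    def __getattr__(self, name):
        raise RuntimeError("lookup blew up")  # stand-in for a real failure

print(getattr(object(), "missing", "fallback"))  # fallback

try:
    getattr(Flaky(), "anything", "fallback")
except RuntimeError as e:
    print("default did not help:", e)
```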
[bar(x) for z in stuff if (x := foo(z))]
is not equivalent to:

[bar(foo(z)) for z in stuff if foo(z)]
because here, foo(z) will be called twice. If foo is an expensive operation or performs some other magic in the background that you don't want to call a second time, := lets you express this behavior without having to break your expression out into a huge for-if-append code block:

output = []
for z in stuff:
    x = foo(z)
    if x:
        output.append(bar(x))

Of course, the above verbose version might end up being better code in the end, because mashing together a bunch of terse code is often not very readable. But in real life, transforming a comprehension into a loop is just a bunch of work that gets in the way of me trying to complete a task.

When I'm developing out an idea and just want to see if the code works and passes my tests, I would rather have the option to insert behavior swiftly without having to reformat an entire block of code. Then when reviewing my code or deciding whether to make it reusable for something else, I can go back and clean things up.
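The double-evaluation point is easy to verify by counting calls; a sketch (Python 3.8+; `foo`, `bar`, and `stuff` are toy stand-ins chosen so both versions produce the same list):

```python
# Counting calls shows the walrus version evaluates foo once per item,
# while the naive rewrite evaluates it in both the filter and the body.
calls = 0

def foo(z):
    global calls
    calls += 1
    return z - 1  # falsy for z == 1

def bar(x):
    return x * 10

stuff = [1, 2, 3]

calls = 0
twice = [bar(foo(z)) for z in stuff if foo(z)]
print(calls)  # 5: three filter calls plus two body calls

calls = 0
once = [bar(x) for z in stuff if (x := foo(z))]
print(calls)  # 3: one call per element
print(once == twice)  # True
```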
It was kind of a rhetorical question. I know I'm not alone. Anyway, if I'm still doing Python at all in a few years I'll be willing to pay people to work on 2, or likely be one of the guys getting paid to support other folks' legacy (or fresh) Python 2 code.
But I already admitted getting more free work from other people is a reason to prefer 3 to 2.
> 10 unpatched vulnerabilities by 2025.
I can handle that. (I mean I can patch P2.)
> Do you think the twisted devs are going to support a deprecated python version for 5 years?
Maybe. But I know they aren't going to destroy the P2 code, and I can patch it if I really need it. It won't be that hard to back-port from 3 to 2, will it?
But again, I don't like Python for network code. I would use something else. If I really really wanted to write non-prototype network services in Python I would probably embed the interpreter in some other solid server to do the heavy lifting and just use Python to write the dynamical bits. But then I would probably just use Lua. Or write the thing in Erlang! What I'm saying is, if I had to choose a language in 2025 to write a server and my choices are Python 2, Python 3, or any other language, my answer is not going to be Python 3 or 2.
> Where do you see any open source projects that are compatible with python2.4?
On PyPI. But that's not my point. My point is that even 2.4 hasn't disappeared yet. There's going to be a "long tail" if you will of P2 in service.
> Sure that exists, but that's not a particularly enviable situation.
You do realize I'm not being conservative for its own sake, eh? I don't envy anyone stuck maintaining a 2.4 codebase, but I also don't envy the folks who have to wrangle with P3.
> That's the price you pay for using unsupported legacy software: not being able to use modern technology.
I just don't think that that price is going to be too high in the case of Python 2. I'm hopeful that there will remain enough critical mass to carry it forward even after official support ends and even with all the effort to deprecate it.
In fact, I think the effort to deprecate it is kind of rude, and a better way of going about the whole thing would have been to have the P3 interpreter support P2 code and let conversion to 3 be driven by its merits. The incompatibility was a choice, and I think it's the wrong choice.
Let me ask you this: If someone else, other than Guido, had come forth with a project called e.g. Python++ (just P3 but with a different name) and tried to get people to use it, how well do you think adoption would go? People would only use it if it had some seriously compelling features, eh? I mean people barely notice Stackless Python and Cython (just two examples of Pythons with extra abilities and compelling use cases) so do you think folks would flock to Python 3 if the BDFL weren't pushing it while simultaneously deprecating Python 2?
All I'm really saying is, if you think of P3 as a separate project, and pretend that P2's wind-down is just a coincidence, then it makes total sense to make arrangements to keep supporting and using P2. (Especially if, like me, you prefer it.) I don't think it will take that much to keep a fork project viable (Tauthon or whatever) since it would eschew changes and just be maintenance upkeep. It's not the first time users have forked a legacy project to keep it alive after the originating body abandoned it.
Lastly, when I said "PEP 572 only makes [Python 2] harder to kill." I meant that it makes people like me double-down on our commitment to Python 2. "From my cold, dead hands!" and like that.
From my reading of the PEP, assignment expressions specifically prohibit type annotations.
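That prohibition can be observed directly: an annotated `:=` target doesn't even parse. A quick sketch (the string being compiled is a deliberately invalid example, not real code from the PEP):

```python
# PEP 572 forbids annotations on walrus targets, so this is a
# SyntaxError at compile time rather than a runtime check.
try:
    compile("(x: int := 5)", "<test>", "eval")
except SyntaxError:
    print("annotated walrus target rejected")
```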
We maintain that list comprehensions do all the things lambdas can without the clunky anonymous functions. All while clearly communicating that a sequence is being transformed without any obscure syntax.
What wrangling is there?
>All I'm really saying is, if you think of P3 as a separate project, and pretend that P2's wind-down is just a coincidence
This is an unhelpful comparison. The same thing is true for approximately any feature for any open source project. Would a fork of go which adds generics take mindshare despite being "better"? No! (such forks exist). But that doesn't preclude the official Go implementation from adding generics.
>The incompatibility was a choice, and I think it's the wrong choice.
I'll repeat that most of the code that I write is python3.6 and python2.7 compatible. The only incompatibilities are things that are essentially bugs in python2/not fully specified (unicode vs. string handling), and then a bunch of additional features. It took me all of 3 hours to port a relatively meaty python2.6 application to being 3.6 and 2.7 compatible (and it was a hell of string + unicode nonsense). All it requires is good tests and some dedication, and porting is really, really easy. People just don't seem to believe that.
>It won't be that hard to back-port from 3 to 2, will it?
Worst case for you: twisted begins to support asyncio and rely on python's own event loop. Then twisted code magically gets async/await support in python3 and backporting features becomes potentially impossible.
re.match shouldn't return None at all. I often write helper functions like:
matcher = lambda r, s: getattr(re.match(r, s), 'group', lambda i: '')
o = matcher(r1, s)(1) or matcher(r2, s)(3)
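To make the idiom above concrete, a runnable sketch; the patterns and input string are hypothetical fill-ins for the `r1`, `r2`, and `s` placeholders:

```python
import re

# The helper from the comment: returns the match's .group method,
# or a stand-in that yields '' when re.match returns None.
matcher = lambda r, s: getattr(re.match(r, s), 'group', lambda i: '')

s = "abc 123"
o = matcher(r'(\d+)', s)(1) or matcher(r'(\w+)', s)(1)
print(o)  # 'abc' - the first pattern fails to match at the start, so
          # the falsy '' falls through to the second pattern's group
```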
here matcher has a fixed, static return type: string.

[bar(x) for x in map(foo, stuff) if x]

I was just trying to be as close to the original as possible.

I do find it odd you call out the calling of `foo` twice as a performance drain but then use the chief example of:
> When I'm developing out an idea and just want to see if the code works and passes my tests, I would rather have the option to insert behavior swiftly without having to reformat an entire block of code.
Tests are not something I (personally) consider to be performance sensitive. But like I said I am sort of neutral on this change. I don't really see a massive benefit to it, so I would personally err on the side of "don't add" but I'm not mad it's being added. It's a "meh" for me.
i don’t know what you mean. actual arguments? do you mean like “--working-directory” or the values passed by them? i am actually not familiar with this library, but it seems the former is handled by the library from the discriminated union constructor names and the latter are right there in the constructors.
and what do you mean there’s no need? that seems rather arbitrary. it’s a way to represent your data explicitly with types, i.e., it is type-driven development.
i can’t further defend this library because i have never used it, but i see no confusion here and don’t even understand the complaints. it seems to be “this is different than how i think in python, so it’s no good”.
Goddamnit, I think you just changed my mind. :-)
However lambdas are mostly discouraged because they're often harder to read for anything non-trivial (and really you can't do much non-trivial with them since they are so constrained) and you can just define a function in the same block and pass the function reference in place of a lambda. I think the premise is more that it's clearer to pass a clearly named method describing the intent instead of a lambda describing the how.
> What wrangling is there?
Every time a PEP modifies the syntax or semantics (e.g. ordered dicts for class namespaces) any project that wants to conform to Python 3 has to "wrangle" those changes into their thing. There are more Pythons than the C Python, not to mention tools like Snakefood[1]. That's in addition to "wrangling" with new bugs introduced into the interpreter while adding support for new features, and with the new sources of bugs that the new features and syntax enable.
A stable unchanging Python 2 doesn't engender any of that overhead. You just have to fix bugs and security, uh, bugs.
> > All I'm really saying is, if you think of P3 as a separate project, and pretend that P2's wind-down is just a coincidence
> This is an unhelpful comparison.
Unhelpful to whom? I'm trying to explain my reasoning around sticking to Python 2 and that's how I look at it. It doesn't matter to me that the BDFL is the one pushing this, as much as I like him he's not an actual dictator. He's pissing in my soup and I don't mind saying so. (Yes, he made the soup in the first place but some of it's in my bowl now and he can't have it back.)
> The same thing is true for approximately any feature for any open source project. Would a fork of go which adds generics take mindshare despite being "better"? No! (such forks exist). But that doesn't preclude the official Go implementation from adding generics.
I don't understand. P3 isn't a fork which adds a few features, that's Tauthon. P3 is a different language that's almost-but-not-quite Python 2.
I'm not saying Python 3 shouldn't exist, that's a separate argument I'm not making. What I'm saying is given a choice between stable Python 2 with a decaying maintenance cost and Python 3 with its open-ended maintenance cost and no compelling technical reasons to adopt it, well, I'm sticking with 2.
> > The incompatibility was a choice, and I think it's the wrong choice.
> I'll repeat that most of the code that I write is python3.6 and python2.7 compatible. The only incompatibilities are things that are essentially bugs in python2/not fully specified (unicode vs. string handling), and then a bunch of additional features. It took me all of 3 hours to port a relatively meaty python2.6 application to being 3.6 and 2.7 compatible (and it was a hell of string + unicode nonsense). All it requires is good tests and some dedication, and porting is really, really easy. People just don't seem to believe that.
You're not wrong, I've done it too. But all I got out of it was Python 3 compatibility... So far your best argument for conversion is "people will support it" and I keep telling you "people will support Python 2 too".
But the incompatibility I'm referring to there is the one I mention in the preceding sentence: "a better way of going about the whole thing would have been to have the P3 interpreter support P2 code and let conversion to 3 be driven by its merits." The Python 3 interpreter should run Python 2 as well. That's the "choice" I think was wrong.
> Worst case for you: twisted begins to support asyncio and rely on python's own event loop. Then twisted code magically gets async/await support in python3 and backporting features becomes potentially impossible.
That actually made me laugh out loud. Cheers. But that still wouldn't break their existing code. And again, I wouldn't use Python for [cutting edge] networking. I've done it. It's good for prototypes.
Look, maybe there will come a day when there's some P3 lib I just have to use in one of my projects and it's somehow just so stitched into the Python 3 that I can't backport it or imitate its magic in Good Ol' Two. If that day comes, well, "porting is really, really easy". ;-)
(It's been really fun discussing this with you today. Cheers and warm regards!)
[1] "snakefood: Python Dependency Graphs" http://www.furius.ca/snakefood/
Or, as I like to call it, "The same lie they've been telling for a decade, that everyone knows is false because lots of people learn JavaScript and do not find it excessively complicated."
In C it's 100% clear, variables only live inside their respective blocks so if you want to mess with the value in an if-else you have to declare it outside of the block.
We can also see a migration away from list and object comprehensions in languages that support both them and lambdas (e.g., C#, Haskell).
How the Python community has maintained its opinion that everyone else is stupid and wrong while also suggesting that their tooling is impossible to understand is beyond me. It seems like a contradictory position.
Python is designed in such a strangely arbitrary, inconsistent, hypocritical and opinionated manner.
"comprehension list" in french can be translated to "liste en intension"
> Could you elaborate on that?
It's a very opinionated statement on my part. `if COND then TRUE-CASE else FALSE-CASE` is the correct form to use, in my opinion. Python uses `TRUE-CASE if COND else FALSE-CASE`.
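For reference, a tiny sketch of the two orderings (the variable names are arbitrary):

```python
# Python's spelling: TRUE-CASE if COND else FALSE-CASE
n = 7
parity = "odd" if n % 2 else "even"
print(parity)  # odd

# The `if COND then TRUE-CASE else FALSE-CASE` ordering the commenter
# prefers isn't valid Python; the nearest equivalent is an if/else
# statement, where the condition does come first.
```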
The only reason Python is used is because it's popular so it seems to fit your criteria outlined in this comment just fine.
Sorry, but it's preposterous to bring up an implementation detail of the standard library that has long been in the process of being fixed. CamelCase was deprecated a long time ago in favour of snake_case; what is left of CC is for backward-compatibility and will eventually disappear. This is all documented.
The standard library is not the language, it's much messier and suffers from all sorts of issues that have nothing to do with the language itself.
tree.xpath(TARGET_XPATH), I believe.
Point is, the comprehension is flawed to start with. If you need f(x) on both sides, then f(x) should happen first, rather than doing it inside and assigning it. That requires a new user to know the if part (at the end) gets executed first, which is dirty.
>>> [a for a in range(10) if True else range(2)]
File "<stdin>", line 1
[a for a in range(10) if True else range(2)]
^
SyntaxError: invalid syntax
And this is the argument why I can't have my wish, because the standard `for` loops have always accepted `if else`, so it would be a backward incompatible change.

That said, I have another idea: an update to the comprehension syntax which would omit duplication of variables, using a new "for in" construct. For example, this line:

(x for x in range(100) if x%2)

...could be written as:

(x for in range(100) if x%2)

Just an idea... :D

> it's opinion that everyone else is stupid and wrong
I can't speak for the BDFL, but I don't believe this is the intent. It's not meant to say everyone else is wrong and Python is right, rather that Python prefers fewer ways to do things and certain complexities aren't worth it when there's a good way already in the language to do $feature. Complexity may not be in the python layer even, it could be maintenance in the underlying implementation (CPython, PyPy, Jython, ...)
A lot of the same numeric tower decisions exist. There's a big schism between List Comprehensions and not, there's no simple way to ship closures around, and Python's concurrent I/O story is a sizzling hot mess.
JavaScript has plenty of problems. So does Python. So why does Python maintain everyone else is stupid for embracing programming constructs that have been deployed successfully since the 60's?
That's the big irony. Python embraces some of the most complex, detail oriented aspects of function-first programming with decorators, but then throws away most of the payoff. It's like eating the peel of a banana and refusing to eat the interior.
Also, click is weird because it wants to take over traditional function syntax and "under the covers" rewrite them. Compared to a much simpler Args -> Map kind of construction, this is a great example of how Python introduces unneeded complexity and prefers to create huge spelunking expeditions into its poorly-explained function invocation syntax & semantics. The PEP we're all commenting around is another great example of that approach. It's too bad Python's community is often more interested in novel uses of Python's semantics than actually readable, reusable concepts.
The irony is other "deep in their own waters" approaches produce stuff that's much more readable than click's without also being some kind of solve-the-universe's-problems black box. Python dooms itself to that because of its refusal to embrace more composable primitives. They'll always end up with competing big-bang projects that don't play well together. Examples available upon request.
This is a really uncharitable interpretation of what he did. It was specifically mentioned as a scripting language, and parsing command line arguments is frequently necessary for scripts.
He then compared the hello world examples for the two most popular command line argument libraries. This is hardly going out of his way to find some bizarre corner case examples - it's the primary example for one of the most important libraries for the advocated use case. He never said anything about any programming languages besides Python and F#.
You can disagree with him without twisting his argument around.
def get_reductor_value(cls, x):
    reductor = dispatch_table.get(cls)
    if reductor:
        return reductor(x)
    reductor = getattr(x, "__reduce_ex__", None)
    if reductor:
        return reductor(4)
    reductor = getattr(x, "__reduce__", None)
    if reductor:
        return reductor()
    raise Error("un(shallow)copyable object of type %s" % cls)
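For context, this is roughly the lookup chain from CPython's copy module that PEP 572 cites as motivation. A minimal runnable sketch of how assignment expressions collapse it (Python 3.8+; `dispatch_table` and `Error` here are stand-ins for the real copy-module names, not the actual ones):

```python
# Sketch: the same lookup chain rewritten with assignment expressions
# (PEP 572, Python 3.8+). dispatch_table and Error stand in for the
# real names in CPython's copy module.
dispatch_table = {}

class Error(Exception):
    pass

def get_reductor_value(cls, x):
    # Each branch binds and tests the candidate reductor in one step.
    if reductor := dispatch_table.get(cls):
        return reductor(x)
    if reductor := getattr(x, "__reduce_ex__", None):
        return reductor(4)
    if reductor := getattr(x, "__reduce__", None):
        return reductor()
    raise Error("un(shallow)copyable object of type %s" % cls)
```

Each assign-then-test pair becomes one line, at the cost of a new operator to learn.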
It's pretty rare that I write multi-expression anonymous functions in JS, and the language appears to be doing everything in its power to make them wholly unnecessary.
My problems with the F# example:
1. It internally rewrites the argument names. "Data" becomes "--data". Who knows how I can get a short flag (-d) in f# land.
2. You don't declare things at once. You define all the arguments, then you define all the descriptions, etc. This isn't the fault of the language necessarily, but this API is bad. It's highly repetitive (each modification to an argument requires a new match expression, it seems, and that's gross).
3. It's unclear if and how this is extensible. I can continue to decorate main to add more args (for example if I wrap an existing tool and define additional flags), in python in an obvious way. This is not obvious in F#. This compounds #2.
Your objection about kwargs isn't particularly compelling. Kwargs are exactly as "stringly" typed as method names, which is exactly how you're doing modifications in F# (or Java or anything else).
And density of the python is trivial to fix by adding newlines.
As an aside, I'm the third (or fourth?) to tell you that your responses appear incredibly disingenuous. And that you are the one acting dismissively, not the post you originally responded to. I don't see any universe where going to the effort of comparing the two languages in question in a reasonable benchmark and sharing it is anything approaching dismissive.
It's only because Python is familiar and Haskell is not. Objectively, both Haskell and Lisp are dramatically simpler to comprehend than Python.
Ruby returns the last thing in a method, which I feel is pretty sane.
> The two examples parse exactly the same set of arguments, so your claim that the python one ignores complexity seems naive, whether or not this was an intentional oversight im unsure of. Or perhaps it's just significantly simpler ;)
You ignored that bit of my post. I said the F# one hints at how to do more complex things. The click example (I use click all the time, by the way) has completely different methods for introducing other constructs or doing custom logic. The F# version just uses lambdas (a sore spot on Python) to do this.
> 1. It internally rewrites the argument names. "Data" becomes "--data". Who knows how I can get a short flag (-d) in f# land.
You'd annotate it. This is why I said the original example offered was poorly researched. They literally didn't scroll down less than two screen widths. You didn't even bother to check either. You'd add an annotation to the declaration, such as: [<AltCommandLine("-d")>]
> 2. You don't declare things at once. You define all the arguments the. You define all the descriptions, etc. This isn't the fault of the language necessarily, but this API is bad. It's highly repetitive (each modification to an argument requires a new match expression it seems, that's gross).
You seem to have chopped this up editing it, but I think the gist is that there is a repeated value here in the match. You seem to think that's bad because of DRY principles and because it's easy to make an error.
It is not easy to make an error. F# is statically typed and checked. It'd be an awful decision in a language with no static correctness checking, like Python. In F#, the compiler will flag you with a non-exhaustive pattern match error and tell you which values you're missing! You can read more about that here: https://fsharpforfunandprofit.com/posts/correctness-exhausti...
As for the decls, this is fine. Not only does the compiler have your back, but it's quite normal to have a fair amount of separation between your main loop and your arg parsing. In fact, you almost certainly have it in F# due to the way args are loaded. You'll usually have a separate file containing those data decls so that you can use them without importing your main loop (F# has a bit of ugliness here imo; modules have a defined linear load order and you can't import code in complex graphs). Remember, unlike a stringly-typed map, you're getting back strongly typed, bespoke discriminated unions that contain exactly the values you expect.
If you'd prefer map-key syntax instead of a typechecked function map (because you hate autocomplete and static correctness) to provide your usage strings... then it's trivial to do that in F#. F#'s documentation is awesome at avoiding incidentals and focusing on the task at hand though.
> 3. It's unclear if and how this is extensible. I can continue to decorate main to add more args (for example if I wrap an existing tool and define additional flags), in python in an obvious way. This is not obvious in F#. This compounds #2.
Since you don't know F#, is it that surprising it's unclear? But anyways...
You don't want it to be extensible. You want to edit the struct and then also edit the error handling and specifications. You want to give the compiler a chance to catch your errors. If you remove an option, the compiler will tell you every place you need to change it. If you add one, nothing will break except your usage map and your error handler (and possibly in extremely weird custom parse cases that can come up, but don't custom parse over unquoted strings and you're fine).
> Your objection about kwargs isn't particularly compelling. Kwargs are exactly as "stringly" typed as method names, which is exactly how you're doing modifications in F# (or Java or anything else).
I think you misunderstand how this works. Incorrect Java method names are a compile-time error. In Python, a missing kwarg is a runtime error. In some cases, a Python linter can catch it via heuristics. That's not always the case, particularly across library or communication boundaries.
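A minimal sketch of that distinction, using a hypothetical `connect` function: the misspelled keyword compiles fine and only fails when the call actually executes.

```python
# A misspelled keyword argument compiles fine; the error only
# appears at runtime, when the call is executed. "connect", "host",
# and "port" are hypothetical names for illustration.
def connect(*, host, port=5432):
    return (host, port)

try:
    connect(hots="db.local")  # typo: "hots" instead of "host"
    error = None
except TypeError as e:
    error = str(e)  # e.g. "... got an unexpected keyword argument 'hots'"
```

A statically checked language would reject the equivalent typo before the program ever ran.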
> And density of the python is trivial to fix by adding newlines.
The same could be done for the F# variant by removing them. That doesn't mean it's a good idea.
> As an aside, I'm the third (or fourth?) to tell you that your responses appear incredibly disingenuous.
I don't think I've been at all dishonest about my intentions. I don't like Python, I think this new PEP makes an already bad language worse, I think the justification for it is bad, and everything in Python would be better without it. What am I being "disingenuous" about? Am I engaging with you dishonestly?
> And that you are the one acting dismissively, not the post you originally responded to.
A bad attempt at dismissing F# by refusing to even read one tutorial page, by a Python user in 2018, does not deserve anything but derision. Your attempt to defend it without even checking to see what the author did is not quite so bad, so I'm being more civil with this response, but I think it's part of a similar kind of Pythonista conversation that goes, "I don't know and I don't want to know, because Python is fine."
> I don't see any universe where going to the effort of comparing the two languages in question in a reasonable benchmark and sharing it is anything approximately dismissive.
Well hopefully this post has helped illustrate some of the considerations that make that comparison more reasonable? I genuinely mean that! I could have skipped over or not explained the part about why the separation doesn't matter, how F#'s compiler has a linear load order, or why it's not problematic (and in fact desirable and autocomplete-assisted) for a bit of repetition.
I think it's a mistake to approach language comparisons looking for ways to dismiss a language until you've written at least a few trivial programs in it. I've written tons of Python and a fair amount of F#. And honestly, I'm not a huge fan of Argu as a library compared to something like optparse-applicative from Haskell. But it's much more serviceable than this comparison implies. I particularly dislike Python's solution because it's a series of kwarg invocations with signature overlap. I've been burned by this in the past, and I dislike the "here is the secret method that does EXACTLY what you want" approach that Python decorator patterns usually demand.
This isn't true at all. Async/await is just a nice way to think about chaining callbacks. They're still callbacks. You can await raw promises and lambdas. Callbacks are still used when they ought to be used, which is in situations where there isn't an "end" to the deferred computation.
> It's pretty rare that I write multi expression anonymous functions in js, and the language appears to be doing everything in it's power to make them wholly unnecessary.
I don't agree with that assessment, and I think a lot of Javascript programmers would disagree too. You still see multi-expression lambdas used a lot, and as I've said elsewhere, async/await is just a way of monadically handling lambdas with the appropriate call signature.
Heck, the entire point of the PEP we're discussing is to try to make it easier to write what'd trivially be recognizable multi-expression lambdas into things like list comprehensions or for decls. What a victory! Now you have to read the entire list comprehension at least twice!
As for this pep, I don't really like it, I think it adds dubious value and overly terse syntax. But that has nothing to do with the general idea that multi expression lambdas are unnecessary.
What you are talking about is a different kind of expression, similar to a ternary operator. It is not the same as if...else
The fact that the python community is proposing and approving PEPs like this is evidence they have a desire for multi expression lambda functionality. They've just made a decision they refused to go back on because of over a decade of pride. They end up rebuilding every use case as an ad hoc piece of syntax.
In PHP, a class is uniquely identified by its name, everywhere. If you define a class Foo, then new $var will resolve correctly either everywhere or nowhere (the name needs to be fully qualified, to avoid namespace headaches).
That's not the case in Python. A class Foo has the same status as any other object. That means you can't rely on its name - it could be replaced by another value. But it does mean you can pass the class itself around instead of its name. Instead of putting the string 'Foo' or the literal Foo::class into a data structure or an argument list, you can just put Foo in there, and call it later.
I think the Python approach is cleaner, but then again, it's what I already knew when I first learned how PHP did it.
Python doesn't need to instantiate classes based on string values, so it doesn't provide an easy way to do that.
Python almost letting you take the PHP approach is a bit of a pattern. Python is dynamic enough to let you do a lot of things you shouldn't be doing. Ugly syntax like globals()[var] is usually (but not always) a sign that you're looking for the wrong kind of solution.
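A small sketch of the "pass the class itself" point above, with hypothetical `Foo`/`Bar` classes and a `registry` dict:

```python
# Instead of storing the string "Foo" and resolving it later via
# globals()[var], store the class object itself and call it when
# needed. Foo, Bar, and registry are hypothetical illustrations.
class Foo:
    kind = "foo"

class Bar:
    kind = "bar"

registry = {"foo": Foo, "bar": Bar}  # values are classes, not name strings

def make(kind):
    return registry[kind]()  # a class is itself a callable
```

Because classes are first-class objects, renaming or shadowing `Foo` elsewhere doesn't break the registry: it holds a reference to the class, not a name to be looked up later.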
This is wrong. Please fully inform yourself before making incorrect claims that I'm being reductive.
Some callbacks may not return a value, but callbacks in general absolutely can and do. (A callback simply need conform to an interface, that interface can be whatever the enclosing code decides it should be).
Either that, or async/await is more powerful than callbacks, because most of the `async` functions I've seen have a return value (coroutines can and do yield results, and async/await is a way of implementing coroutines).
I also don't really think that this solves the same issue as multiline lambdas in all cases. Sure, one of the examples (the case of a double function invocation in a comprehension) can be solved with a multiline lambda (among other ways). But multiline lambdas don't particularly clean up `if (x := re.match(...)): process(x)`. Which I'd argue is the more motivating example.
So sure, this doesn't solve your problem with the language in the best way possible, but it was never really intended to, it just sort of can as a side effect. (I don't really think this is a good thing, mind you, but it is a thing).
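For reference, here is that `re.match` pattern in a runnable form (Python 3.8+; `classify` and the digit pattern are illustrative, not from the thread):

```python
import re

def classify(text):
    # Bind the match object in the condition itself (PEP 572),
    # instead of a separate "m = re.match(...)" line.
    if m := re.match(r"(\d+)", text):
        return int(m.group(1))
    return None
```

A multiline lambda wouldn't help here; the win (such as it is) comes entirely from binding inside the condition.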
Async "callbacks" in Javascript cannot have semantically meaningful return values, because a callback context cannot read the values. EVERY lambda has a return value in Javascript, but this is more an implementation detail of dynamic typing than a statement of intent. They don't have callers that accept values of any type, so they're effectively a function of (Context -> IO ()). They're executed strictly for side effects.
If you have a meaningful return value, it's not properly called a callback. You can google about this, because obviously the "spec" is "how it is used in literature and the common vernacular." As an example, the first SO hit on Google I saw has a top-rated answer saying "An async callback function can return a value, in other words, but the code that calls the function won't pay attention to the return value." https://stackoverflow.com/questions/6847697/how-to-return-va...). You can surely find more; this is what distinguishes constructions like "callbacks" from "hooks" (which often return booleans, but may modify internal state passed to them or execute side effects as a callback) and further from "Promises" (which compute a value once and allow useful synchronization on the completion of that computation).
> Either that, or async/await is more powerful than callbacks, because most of the `async` functions I've seen have a return value (coroutines can and do yield results, and async/await is a way of implementing coroutines).
Well of course they are. I don't think I said they were equal in every respect, did I? I was a bit overly specific in talking about how Promises are often just used as callbacks, so sorry if I was unclear.
Async-await is a continuation monad. That's strictly a more powerful construction than a side-effect-only callback.
> I also don't really think that this solves the same issue as multiline lambdas in all cases.
I agree. It solves one very small slice: not having multi-line lambdas for filters and maps, nor a compiler that can fuse them.
> But multiline lambdas don't particularly clean up `if (x := re.match(...)): process(x)`. Which I'd argue is the more motivating example.
This PEP is even more exquisitely awful if that is truly the dominant use case. "We saved a line for the sake of saving a line" is a brutally frivolous reason to introduce that much new machinery.
Async callbacks in javascript cannot return semantically meaningful values to sync code. But the classic "callback hell" is when you have async code calling other async code. In that case, a callback can absolutely return a meaningful value to the other async callback. And yes, that is a common and valid use.
>This PEP is even more exquisitely awful if that is truly the dominate use case.
I don't particularly disagree with you here.
> The click example (I use click all the time, by the way) has completely different methods for introducing other constructs or doing custom logic. The F# version just uses lambdas (a sore spot on Python) to do this
Does it? It certainly makes strong use of pre-defined functions, but those are just functions, and one can provide one's own functions (even lambdas!) instead.
>You seem to think that's bad because of DRY principles and because it's easy to make an error.
Not because it's easy to make an error, but because it's unnecessarily verbose. Consider some variants on the same invocation, in relatively handwaved Python-y, Java-y, and F#-y syntax:
@flag('--flag', '-f',
      type=validator,
      help='a flag description')

f = new Flag().name('flag').short('f')
     .validator(aValidator)
     .description('a flag description');

type f =
    | [<AltCommandLine("-f")>] Flag of validator
with
    interface IArgParserTemplate with
        member this.Usage =
            match this with
            | Flag _ -> "a flag description"
This isn't a knock on F# (necessarily); I'm specifically arguing that the API for the flag validator was badly designed. There's no obvious reason that Argu should require Usage to be defined by a match block instead of as part of the construction of the flag itself, especially when the IArgParserTemplate interface must be implemented by all flags. They even have a special annotation to opt out of that interface after implementing it, `Hidden`. (This may be a bit ranty, but you can reduce this to "why isn't Description also an annotation?", at which point you're doing with annotations exactly what Java is doing with method chaining or Python with kwargs.)

> The same could be done for the F# variant by removing them. That doesn't mean it's a good idea.
My point is that adding newlines is a totally reasonable thing to do if you feel that the python example is overly terse. It's a trivial, straightforward, and often recommended fix.
>I think you misunderstand how this works.
Perhaps I misunderstood. I was responding to the objection that two kwargs had similar names and therefore this was a wart. Such an issue can't be caught at compile time (since the code will be valid either way).
>What am I being "disingenuous" about? Am I engaging with you dishonestly?
Perhaps "unfairly assuming that everyone else is engaging with the same intent that you are" is a better way to put it. I see no signal that the so called "attempt to dismiss it" was such. I think that a much more reasonable way of interpreting it is that "From a cursory analysis, your unsubstantiated claim that F# is obviously superior to python in every way doesn't appear to pan out, can you better substantiate it?" (which they never did).
>Well hopefully this post has helped illustrate some of the considerations that make that comparison more reasonable?
Not particularly. Certainly those are features that might be useful in some contexts, but they're of dubious value in the context of scripting. And the cost is a parsing syntax that is both more verbose and more magical (again, it apparently name-mangles things internally for you just for funsies).
You brought up optparse-applicative, and it looks much more reasonable. In fact, minus Haskell's love of inventing extra operators (<>, <$>, <*>), it actually closely resembles what the others are doing (and the definition of a flag has a semigroup of arguments that very closely resembles a set of keyword args for setting flag options). If F# used optparse-applicative, we wouldn't be having this conversation.
But then it ceases to be a callback? It's a one-shot function called within the context of a callback. A use case promises are particularly good at enabling.
No, it is a synchronous callback called in an async context. I think the confusion here is that you think that "callback" and "asynchronous callback" are synonyms when they are not[1]. A callback is simply any function passed into another function to be executed at a later time. An async callback is such a function which will be called asynchronously.
In JS, asynchronous callbacks are not allowed to return values (or well they can return values they just cannot be meaningfully accessed). Synchronous callbacks, however, certainly are.
Yes indeed, promises are incredibly good at un-nesting said synchronous-inside-of-asynchronous code.
[1]: https://en.wikipedia.org/wiki/Callback_(computer_programming...
Because generally what I call those is "higher order functions." For example, by the definition you've offered Array.map takes a "synchronous callback" and that seems quite wrong.
> The then() method returns a Promise. It takes up to two arguments: callback functions for the success and failure cases of the Promise.
Both of these functions may be synchronous, and may return values. Thus
Promise.resolve(2)
.then(x => x + 1)
.then(x => x + 1)
.then(console.log);
Will print 4, not undefined. The documentation also refers to these as "handler" functions, but uses "callback" interchangeably, although less often.

Generally speaking, "callback" is only used in the context of the function being executed in an async context, but again there's nothing stopping sync code from being executed in an async context, and indeed this is pretty common (anytime one invokes a promise, as a simple example).
[1]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
That promise chain is not synchronous. You can prove this by putting your code sample and a following console.log("meep") into a function. The console.log("meep") will always fire before your promise's final handler runs, when you call the function. That's because it's scheduled after your calling function returns.
Try it.
> Will print 4, not undefined.
That's indeed true, but it's a special property of promises that you can extend the construction of a callback with `then` calls. You can never return that value to the calling context. Creating the illusion that you can is what async/await is for. The promise context extended the computation to the final callback, which executed the side effect.
It would be a much neater implementation if the underlying resolve/reject functions that construct the promise were synchronously calling the functions supplied by then. That'd be awesome. Sadly, that's not possible with this implementation, because you can then-extend a resolved promise. But I confess I like how algebraic this implementation is.
It's not wrong to call a then-extension a "callback" if it's the final one in the chain. Its computed value is forever discarded, as ultimately is the value of the entire promise. All promises terminate with a callback whose value is discarded.
As I said, Promises are special and expose a continuation monad, in that they're a way to construct callbacks piece-wise. It's a very useful piece of functionality for JavaScript to have and enables the underlying syntactic transformations that make async/await work.
> And this is wrong -- this is not how Python if statements work.
Huh? Didn't you yourself say I was not talking about if statements:
> What you are talking about is a different kind of expression, similar to a ternary operator. It is not the same as if...else
In any case, I'm talking about the case that goes `TRUE-CASE if COND else FALSE-CASE`, as can be deduced from my typing `TRUE-CASE if COND else FALSE-CASE`.
Scripting is a concept that doesn't actually mean anything other than, "I'm hoping I'm allowed to throw this away asap."
def F0(node): return node.find(SOME_XPATH)
etc. If you're binding the new function to a name anyway, why bother with `lambda` when `def` works just fine?

The promise chain is executed synchronously in an asynchronous context. You seem to misunderstand this nuance. `x => x + 1` is a synchronous function. It will always execute synchronously, will return a result when it is invoked, and will block (within its context) until that invocation is complete. However, its calling context may be paused/pre-empted.
Importantly though, the promise won't be pre-empted during the evaluation of a synchronous function:
function sleep(ms) {
  var start = new Date().getTime(), expire = start + ms;
  while (new Date().getTime() < expire) { }
  return;
}
Promise.resolve(0).then(
  (x) => {
    for (i = 0; i < 15; i++) {
      sleep(1000);
      console.log(`doing stuff ${i}`);
    }
  }).then(console.log);
If you run this in your console and then try to do anything, you will be unable to. You'll be blocked by the asynchronous code (because it never releases). Replace my blocking sleep with setTimeout, a nonblocking alternative, and you'll find that things work normally. You're executing synchronous, blocking code in an asynchronous context.

Promises aren't doing anything magical; they're simply syntactic sugar for flattening otherwise nested chains of this (where +1 is a stand-in for the action that this specific function is taking):
f = (cb, v) => cb(v + 1)
which quickly balloons to

(v) => (f(console.log, v))
f((v) => (f(console.log, v)), 1)
f((v) => (f((v) => (f(console.log, v)), v)), 1)
back to something sane:

Promise.resolve(1)
.then(f)
.then(f)
.then(f)
.then(console.log)
There's really no major difference between those two constructs (well, promises also provide infrastructure for releasing control to another context).

All of those are synchronous functions, executed synchronously, in an asynchronous context, and all of them are called callbacks by what is perhaps the most authoritative source on web programming today.
[bar(foo(z)) for z in stuff if foo(z)]
as [bar(y) for y in (z for z in stuff if foo(z))]
or even as [bar(y) for y in filter(foo, stuff)]
... although, I get that `map`, `apply`, and `filter` aren't generally considered pythonic.

Overall, I think I agree with you - the new syntax in PEP 572 might be handy, but it isn't necessary, and I would say that the cognitive overhead of encountering yet another syntax doesn't justify the benefit, much less the technical overhead for the interpreter.
[bar(foo(y)) for y in (z for z in stuff if foo(z))]
(etc.) though, since `bar` takes as input the output of `foo`. This leads to the objectionable duplicate calls to `foo`, hence the new assignment expressions.

I like Dunnorandom's
[bar(x) for x in map(foo, stuff) if x]
best for a correct result using existing syntax, or

[bar(y) for y in (foo(x) for x in stuff) if y]
if you don't like `map`.
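For completeness, the assignment-expression version would make the duplicate `foo` call disappear entirely. A sketch with placeholder `foo`/`bar`/`stuff` (my choices, not the thread's actual functions; requires Python 3.8+):

```python
# Placeholder definitions mirroring the thread's foo/bar/stuff shape;
# the walrus binds foo(z) once, in the filter, and reuses it in the body.
def foo(z):
    return z * 2 if z % 2 else 0  # nonzero (truthy) only for odd z

def bar(y):
    return y + 1

stuff = range(5)

result = [bar(y) for z in stuff if (y := foo(z))]  # foo called once per z
```

For `stuff = range(5)`, `foo` is truthy only for z = 1 and z = 3 (giving y = 2 and y = 6), so `result` is `[3, 7]`, with exactly one `foo` call per element.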