Python packages seem to often rope in a surprising number of dependencies for relatively limited libraries.
I can easily imagine pulling this package into my work: thank you for keeping the requirements to a minimum!
If only this comment supported case conversion...
In any case, congrats on shipping!
On a recent project I spent about an hour trying to do something similar (and far less sophisticated) before I realized it was a problem I had no real desire to solve, so I backed out all my changes and just went with string.capitalize(), even though it didn't really do what I was looking for. Looking forward to using this instead!
Actually, this library supports conversion of even such strings!
```python
>>> import textcase
>>> textcase.convert("HAppY ApRiL FoOLs!", textcase.case.SNAKE, (textcase.boundary.SPACE,))
'happy_april_fools!'
```
Thanks for the congratulations!
Right now, there's no such GH badge. Since the project will always have zero dependencies, I think we can simply use a static badge like this:
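`![dependencies](https://img.shields.io/badge/dependencies-0-brightgreen)` (a shields.io static badge, if I have the syntax right).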
It sounds like you had quite the adventure with text casing on your project. I'm happy this library can save you some time and hassle. Looking forward to seeing what can be built with it!
https://www.unicode.org/versions/Unicode16.0.0/core-spec/cha...
It does not support non-English title casing. From the documentation:
> It also works non-ascii characters. However, no inferences on the language itself is made. For instance, the digraph ij in Dutch will not be capitalized, because it is represented as two distinct Unicode characters. However, æ would be capitalized
I appreciate your suggestion regarding the name, but unfortunately that name was already taken, so "textcase" was chosen.
I also have ideas for adding dictionary key conversion and other features in the future that will handle more than just strings. In addition, you can already use this library to convert the case of an Iterable[str] using textcase.pattern.
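To give a rough idea of the dictionary key conversion, here is just a sketch built on the existing convert(), not a committed API:

```python
from textcase import case, convert

data = {"firstName": "Ada", "lastNameOfUser": "Lovelace"}

# Sketch: convert every top-level key of a flat dict to snake_case.
snake = {convert(key, case.SNAKE): value for key, value in data.items()}
print(snake)  # {'first_name': 'Ada', 'last_name_of_user': 'Lovelace'}
```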
It looks like your library does not support it, which is understandable since it is a huge problem to tackle, but I just wanted to be sure.
I guess handling these nuances falls under the broader categories of internationalization (i18n) and localization (l10n).
My only suggestion is here:
> It also ignores any leading, trailing, or duplicate delimiters:
```python
from textcase import case, convert
print(convert("IOStream", case.SNAKE)) # io_stream
print(convert("myJSONParser", case.SNAKE)) # my_json_parser
print(convert("__weird--var _name-", case.SNAKE)) # weird_var_name
```
In the case of a conversion target that has delimiters (snake, kebab), it might be nice to have an alternative option that preserves such features but normalises them to the target delimiter, i.e.
print(convert("__weird--var _name-", case.SNAKE, preserve=True)) # __weird__var__name
When I say simple use cases, I mean that since you probably don't need all of these functions at once, it would be easier to copy the code you need (if you don't feel comfortable writing it yourself) than to add yet another library to your dependency tree.
Perhaps document that clearly—it's an important restriction that the library assumes English-language strings. ("no inferences on the language itself is made" isn't quite true since the language is inferred to be English, or to at least follow English-compatible rules for casing)
I suspect you mean "featureful", "full-featured" or similar[1]—"feature complete" means that you're not going to add any more features.
[1] https://english.stackexchange.com/questions/393517/what-do-y...
And that's just in English...
If "text" is in Greek, like "Καλημέρα", the upper form should be "ΚΑΛΗΜΕΡΑ", not a juxtaposition of upper() conversions of each letter.
Yeah, there is such a problem with the naming: "text" suggests something different than just a "string".
It also looks nice for exploratory data analysis:

```python
import pandas as pd
from textcase import case, convert

df = pd.read_csv(f)
df.columns = [convert(col, case.SNAKE) for col in df.columns]  # snake_case column names
```