    1743 points caspii | 29 comments
    1. cybice No.27431350
    As a web developer, I have a strong feeling that we are writing the web for Googlebot and not for people. For every website I create, I get a list from SEO of what to add: 200 links at the bottom of each page, different titles, headers, metas, human-readable URLs without query params, canonical URLs, nofollow rules, etc. Most of these things are invisible to users and exist only for Googlebot.
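
    A minimal sketch of what such a checklist produces in practice; the site, URLs, and copy here are hypothetical:

        <head>
          <title>Blue Widgets | Acme Store</title>
          <meta name="description" content="Hand-made blue widgets, shipped worldwide.">
          <!-- canonical: tells crawlers which URL is the "real" one when several serve the same content -->
          <link rel="canonical" href="https://example.com/widgets/blue">
        </head>
        <body>
          ...
          <!-- nofollow: asks crawlers not to pass ranking credit through this link -->
          <a href="https://example.com/login" rel="nofollow">Log in</a>
        </body>

    None of this changes what a visitor sees; it exists for the crawler.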
    2. pjc50 No.27431423
    Well, yes, because Googlebot is the gatekeeper of popularity and income for websites. You've got to appease the decision maker.
    3. hyperhopper No.27431497
    The problem is that the internet at its conception was just a way to host content, not a way to discover content. When discovery was done via word of mouth or other extra-internet means, the websites themselves were just for the people who viewed them.

    Now, when the website needs to not only contain content, but also be its own advertisement, writing it in a way that will maximize virality is the natural course of action to make sure the site actually gets seen.

    This will likely remain true until there is a method of finding webpages that is not based on automated scraping or on the pages themselves.

    4. ridaj No.27431591
    Much of it is driven by cargo cult SEO: throwing everything and the kitchen sink into the page in the completely unproven hope that it'll somehow game the rankings.
    5. ricardo81 No.27431615
    Some of the technical SEO is good though, like simply making the page crawlable and putting the content in a logical order.

    The "fiddle with H1" or "write X amount of words" or "buy Y number of links with a % of anchor text" is silly.

    6. tomcooks No.27431648
    > Some of the technical SEO is good though, like simply making the page crawlable and putting the content in a logical order.

    Semantic HTML was created to help screen readers and browsers understand how content is organized; its being hijacked by search engines is just a side effect.
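
    A small illustration of the semantic markup meant here, as opposed to a page of undifferentiated <div>s; the content is made up:

        <body>
          <header>
            <nav><a href="/">Home</a> <a href="/articles">Articles</a></nav>
          </header>
          <main>
            <article>
              <h1>How webrings worked</h1>
              <p>...</p>
            </article>
          </main>
          <footer><address>contact@example.com</address></footer>
        </body>

    Screen readers use these landmark elements to let users jump straight to the navigation or the main content; crawlers read the same structure.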

    7. ricardo81 No.27431673{3}
    The point I was meaning to make is that it's quite easy to make a site uncrawlable, and therefore unfindable in search engines.

    e.g. Google always had problems indexing Flash websites, and it historically had issues with sites that rely heavily on JavaScript. Nowadays that's less of a problem, at least for Googlebot.
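
    A sketch of the classic failure mode (the URLs and function name are hypothetical): a crawler can follow an ordinary anchor, but historically could not execute the script behind a click handler, so content reachable only that way went unindexed.

        <!-- crawlable: a real href that a bot can follow -->
        <a href="/products/blue-widget">Blue widget</a>

        <!-- historically invisible to crawlers: no href, content loaded by script -->
        <span onclick="loadProduct('blue-widget')">Blue widget</span>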

    8. apples_oranges No.27431679
    As a (sometime) mobile developer, I rarely if ever see apps that don't have Google SDKs bundled either...
    9. Cthulhu_ No.27431701
    On the other hand, Google over the years has tweaked their algorithms and recommendations to match up with what makes a good site, in terms of content and markup.
    10. lvncelot No.27431771
    Even (or rather, especially) if every piece of SEO advice is correct, it still means that Google effectively has a lot of control over the shape of the modern web, through indirect pressure via SEO alone.
    11. lvncelot No.27431782
    Yes, but even for those, it means that we are left to hope that what's good for a crawler and what's good for e.g. a screen-reader will still align in the future. Right now it feels almost coincidental.
    12. Sharlin No.27431884
    On the contrary, the Web, being a hypertext system, was definitely always about discovering content. If you found an interesting website, it would typically link to other interesting sites. There used to be ways to systematize these ad-hoc linkings, such as Web rings. And the first attempts to catalogue and categorize the contents of the (then tiny) Web were in the form of human-curated directories à la Yahoo. It’s just that in just a few years it became apparent that this approach could not scale, and search engines based on automatic crawlers became the norm – but again, critically, these too are of course fundamentally dependent on the Web’s discoverability by following hyperlinks!
    13. stinos No.27432003{3}
    Yeah, I also don't really remember this extra-internet thing. Perhaps the author is talking about a very early period of the internet (which I don't know)? What I remember is that before 'real' search it was indeed what you describe: an endless chain of links from one site to another, and sites aggregating links.
    14. hliyan No.27432029
    I know that it's being done, but I don't know if it's necessary. I frequently find good old unstyled HTML pages from the 90's internet (the ones with Prev/Next/Up links, like this: https://tldp.org/LDP/abs/html/here-docs.html) at the top of Google results.
    15. Mauricebranagh No.27432041
    Apart from stuffing 200 links in the footer, why is this bad?
    16. growt No.27432191
    Human-readable URLs don't sound that bad.
    17. spiderfarmer No.27432732
    I run every piece of SEO advice as an experiment before implementing it across my network, and a lot of the advice actually brings results.
    18. dspillett No.27432822{3}
    Though a useful side effect of SEO people finding it useful is that what they do for their own gain can improve overall accessibility. Too often the opposite happens: when people trying to game systems accidentally affect accessibility, it is usually for the worse.
    19. Jenk No.27433007
    It is cargo cult, but it's cargo cult because it is the way to "success". Company A has great page ranking and blogs about how they think they got there. Company B also has great page ranking, but thinks it did something different from Company A, so it blogs about that too. Everyone else reads both blogs, takes the intersection of what the two companies did, and implements those changes. Iterate for every difference you encounter and voilà: you now have your rubber-stamp SEO method.
    20. pbhjpbhj No.27433119
    I didn't check, but to my recollection that domain is pretty old, and domain age is supposed to be a principal metric for trust (which in turn is a strong signal for page rank). So, ...

    I mean, it's pretty reasonable: if a site has been around a long time, it's generally going to be 'good'.

    21. mercury_craze No.27433272
    It never seemed important to the CEO of a previous company I worked for that we had something to say, only that we gave off the impression that we had something to say. We hired an outsourced blog-writing service to fill our WordPress instance with generic, inoffensive platitudes and listicles poorly cribbed from Wikipedia and the ONS. Squint a little and you could convince yourself there was value in it, but nobody with any experience in the problem space would treat it as anything more than marketing fluff. His hope was always that one day we would be rewarded by the great Google algorithm and appear on the first page for the search terms we were convinced our users were looking for, but the end result was a blog largely designed to be read by robots.

    It's the same with the tweaks you have to perform for SEO: some have questionable value to the end user, but you jump through the hoops anyway because it's what is done; by pleasing the robots you're rewarded with a higher search position.

    22. kevincox No.27433762{3}
    This works well for random exploration, or exploration of related topics, but it is basically useless for finding information on a new topic, since you don't have anywhere to start.

    The only way would be to keep following links, Wiki Game style, hoping to get closer to the intended target. Luckily there are huge robots that have done this for you and can tell you which links lead to your destination.

    23. loonster No.27434200
    Extremely useful for when a link dies and there is no useful archive.
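
    A hypothetical contrast makes the point:

        https://example.com/blog/2021/06/bash-here-documents   <- topic survives the dead link
        https://example.com/index.php?id=4921&cat=7            <- tells you nothing once it 404s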
    24. atatatat No.27436018
    That sounds...like a great reason not to get into "mobile" dev and stick to PWAs.
    25. novaleaf No.27436384
    Worse: paid content farms and AI generating crap "articles" by the boatload, targeting every organic search term five different ways.

    The result is that ACTUALLY USEFUL articles are buried on page 5, and any slightly helpful bit of content in the top articles is repeated (using different grammar, of course) in all the other "top" articles.

    26. tootie No.27437560
    What I tell every client is that 90% of SEO is in writing good, relevant content. Technical SEO is more like housekeeping. Adding footer links is redundant if you have a sitemap and good navigation. If your users can find stuff easily, crawlers can too. The biggest technical things that I make a stink over are canonical URLs and https.
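
    For reference, a minimal sitemap of the kind meant here, in the standard sitemaps.org format; the URL and date are invented:

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <url>
            <loc>https://example.com/widgets/blue</loc>
            <lastmod>2021-06-01</lastmod>
          </url>
        </urlset>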
    27. KMag No.27443297{4}
    Also, I remember webrings being helpful for content discovery in the mid-to-late 1990s. Different authors on a given subject would cooperate and put something like a banner ad at the bottom of their pages with "next" and "previous" links, so you'd get a circular, doubly-linked ring of cooperating sites on that subject.
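
    A sketch of such a banner; the ring name and hub URL are invented, but the pattern (a central hub that resolves "previous" and "next" for each member site) was the common one:

        <!-- webring banner at the bottom of a member page -->
        <div>
          Part of the Retro Computing Webring:
          <a href="https://ring.example.org/prev?site=42">&larr; previous</a> |
          <a href="https://ring.example.org/next?site=42">next &rarr;</a>
        </div>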
    28. nimbleal No.27447224
    Fortunately, with GPT-3 and the like, I'd imagine this approach will soon have had its day. Not that I'm optimistic about whatever will replace it.
    29. rhizome No.27510625{3}
    My sense is that Jevons' paradox means the blogpap business will explode. Google will be filled with even more pithy, business-topic-supporting SEO blather as human writers put GPT to work for a lot more clients than they had before GPT-3.