
94 points by thepianodan | 4 comments

I had a mind-blown moment when I learnt that Obsidian was built without any frontend JS framework (https://forum.obsidian.md/t/what-framework-did-the-developer-use-to-create-obsidian-desktop-application/30724/11).

The benefits, as I see them:

    JS frameworks move really quickly, and when we're working on a large, long-term project, it sucks when big breaking changes are introduced after only a couple of years. Sticking to slow-moving web standards (which are quite mature by now) increases the longevity of a project.

    And the stability also means that more time is spent on delivering features, rather than on fixing compatibility issues.

    There is also the benefit of independence. The project's success is not tied to the framework's success. It also makes the project more secure against supply-chain attacks and the like.

    Because there is no "abstraction layer" of a framework, you also have greater control over your project, and can make performance optimizations at a lower level.

    I feel not using a framework can even make us better developers, because we know more of what's going on.
There are benefits to using frameworks too; I'm not here to challenge that.

But this alternative of using none... it seems rarely talked about. I want to learn more about building large (preferably web-based) software projects with few dependencies.

Do you have any suggestions on how to learn more about it? Are there any open source projects you know which are built this way? It needs to be large, complex, app-like, and browser based. I'm more interested in the frontend side.

Thank you!

1. brendanmc6 ◴[] No.45619720[source]
I've abandoned Next.js and React for Elixir / Phoenix. I am able to build a perfectly pleasant user experience with just a sprinkle of vanilla JS via Phoenix hooks.

The fact that I have been able to build a multi-user collaborative editor experience without a single additional dependency is incredible. I previously worked for a well-established and well-funded React team that had this feature on their roadmap for half a decade but still found it too daunting to implement.

Phoenix was a great reminder that a lot of the "frontend engineering" we find ourselves doing as React developers just isn't necessary with the right backend. It's HORRIFIC to look back at all the yakshaving I've done in my career already. Wrangling types (GraphQL, codegen libraries), wrangling queries and data-fetching (react-query, SWR, server components), fiddling with middleware (serverless functions, getStaticProps, CDNs). I've seen teams outright abandon testing because the hours they invested just weren't catching any of the bugs that mattered.

I'm not doing any of that anymore. I'm spending that time refining the core data model, improving test coverage, thinking about go-to-market and making money.

Phoenix may not be a good choice if your product has reached that level of maturity and product-market fit where you really should care about "microinteractions", fine-tuned animations, or advanced use-cases for an SPA like offline support and highly-optimistic UI. But I would argue that even mature products don't truly need these things. Just look at the GitHub UI. I've spent a truly astronomical number of hours in that UI and never wished I had WYSIWYG text editing, or animated skeleton UIs, or the dozen other things that the React community tells us we need.

replies(1): >>45620275 #
2. juliend2 ◴[] No.45620275[source]
I'm curious: what is it about Phoenix specifically that made this so productive for that project? Is the frontend using something like HTMX?
replies(2): >>45622720 #>>45624763 #
3. shawa_a_a ◴[] No.45622720[source]
They're probably using some features of LiveView; I'm not too familiar with how HTMX works, but with LiveView you can define all of your logic and state handling on the _backend_, with page diffs pushed to the client over a websocket channel (all handled out of the box).

It comes with some tradeoffs compared to fully client-side state, but it's a really comfortable paradigm to program in, especially if you're not from a frontend background, and really clicks with the wider Elixir/Erlang problem solving approach.
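
As a rough sketch (module and event names made up): all the state lives in the socket's assigns on the server, and browser events arrive as callbacks over that same websocket:

```
defmodule MyAppWeb.CounterLive do
  use Phoenix.LiveView

  # All state lives server-side, in the socket's assigns.
  def mount(_params, _session, socket) do
    {:ok, assign(socket, :count, 0)}
  end

  # `phx-click="inc"` in the markup sends this event over the websocket;
  # LiveView re-renders and pushes only the resulting DOM diff to the client.
  def handle_event("inc", _params, socket) do
    {:noreply, update(socket, :count, &(&1 + 1))}
  end

  def render(assigns) do
    ~H"""
    <button phx-click="inc">Clicked {@count} times</button>
    """
  end
end
```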

https://hexdocs.pm/phoenix_live_view/js-interop.html#handlin...

Hooks let you do things like have your DOM update live, but then layer on some JS in response.

For example you could define a custom `<chart>` component, which is inserted into the DOM with `data-points=[...]`, and have a hook then 'hydrate' it with e.g. a D3 or VegaLite plot.

Since Phoenix/LiveView is handling the state, your JS only needs to be concerned with that last-mile integration; there's no need to pair it with another virtual DOM / state-management system.

https://hexdocs.pm/phoenix_live_view/js-interop.html#client-...
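
A rough sketch of the server side of that chart idea (the `Chart` hook name, module names, and the `data-points` encoding are all assumptions; the matching JS hook registered in app.js would read the dataset and draw the plot with D3 or VegaLite):

```
# Hypothetical sketch: a function component rendered by LiveView and later
# hydrated by a JS hook registered client-side under the name "Chart".
defmodule MyAppWeb.Components.Chart do
  use Phoenix.Component

  attr :id, :string, required: true
  attr :points, :list, required: true

  def chart(assigns) do
    ~H"""
    <div
      id={@id}
      phx-hook="Chart"
      phx-update="ignore"
      data-points={Jason.encode!(@points)}
    >
      <!-- the hook reads data-points and draws the plot client-side -->
    </div>
    """
  end
end
```

`phx-update="ignore"` keeps LiveView from patching the DOM the hook manages, so the two don't fight over the same element.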

4. brendanmc6 ◴[] No.45624763[source]
The big win for me has been the built-in PubSub primitives plus LiveView. Since the backend is already maintaining a WebSocket connection with every client, it's trivial to push updates.

Here is an example. Imagine something like a multiplayer Google Forms editor that renders a list of drag-droppable cards. Below is a complete LiveView module that renders the cards, and subscribes to "card was deleted" and "cards were reordered" events.

```

  defmodule MyApp.ProjectLive.Edit do
    use MyApp, :live_view
    import MyApp.Components.Editor.Card

    def mount(%{"project_id" => id}, _session, socket) do
      # Subscribe view to project events
      Phoenix.PubSub.subscribe(MyApp.PubSub, "project:#{id}")
      project = MyApp.Projects.get_project(id)

      socket =
        socket
        |> assign(:project, project)
        |> assign(:cards_drag_handle_class, "CARD_DRAG_HANDLE")

      {:ok, socket}
    end

    def handle_info({:cards, :deleted, card_id}, socket) do
      # handle project events matching signature: `{:cards, :deleted, payload}`
      cards = Enum.reject(socket.assigns.project.cards, fn card -> card.id == card_id end)
      project = %{socket.assigns.project | cards: cards}
      socket = assign(socket, :project, project)
      # LiveView will diff and re-render automatically
      {:noreply, socket}
    end

    def handle_info({:cards, :reordered, card_change_list}, socket) do
      # omitted for brevity, same concept as above
      {:noreply, socket}
    end

    def render(assigns) do
      ~H"""
      <div>
        <h1>{@project.name}</h1>
        <div
          id="cards-drag-manager"
          phx-hook="DragDropMulti"
          data-handle-class-name={@cards_drag_handle_class}
          data-drop-event-name="reorder_cards"
          data-container-ids="cards-container"
        />
        <div class="space-y-4" id="cards-container">
          <.card
            :for={card <- @project.cards}
            card={card}
            cards_drag_handle_class={@cards_drag_handle_class}
          />
        </div>
      </div>
      """
    end
  end
```
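
The piece not shown above is the publishing side: whatever code performs the delete broadcasts on the same topic, and every subscribed LiveView picks it up in handle_info/2. Roughly, with the context module and function names assumed:

```
defmodule MyApp.Projects do
  # Hypothetical context function, called by whichever LiveView (or controller)
  # handles the delete action.
  def delete_card(project_id, card_id) do
    # ... delete the row in Postgres here ...

    # Every LiveView subscribed to "project:#{project_id}" (one per connected
    # collaborator) receives this message in handle_info/2 above.
    Phoenix.PubSub.broadcast(
      MyApp.PubSub,
      "project:#{project_id}",
      {:cards, :deleted, card_id}
    )
  end
end
```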

What would this take in a React SPA? Well, of course there are tons of great tools out there, like Cloud Firestore, Supabase Realtime, etc. But my app is just a vanilla Postgres + Phoenix monolith! And it's so much easier to test, again using only the built-in testing libraries.
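
For a flavour of the testing story, a sketch using the built-in Phoenix.LiveViewTest helpers (the route, fixture, and field names are made up):

```
defmodule MyApp.ProjectLive.EditTest do
  use MyApp.ConnCase, async: true
  import Phoenix.LiveViewTest

  test "a card deleted elsewhere disappears for this viewer", %{conn: conn} do
    project = project_fixture()            # hypothetical fixture with cards
    [card | _] = project.cards

    {:ok, view, _html} = live(conn, "/projects/#{project.id}/edit")

    # Simulate another user's delete by broadcasting the same message the
    # LiveView subscribes to in mount/3.
    Phoenix.PubSub.broadcast(
      MyApp.PubSub,
      "project:#{project.id}",
      {:cards, :deleted, card.id}
    )

    refute render(view) =~ card.title      # `title` is a made-up field
  end
end
```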

For rich drag-drop (with drop shadows, auto-scroll, etc.) I inlined DragulaJS[1], which is ~1000 lines of vanilla JS. As a React dev I might have been tempted to `npm install` something like `react-beautiful-dnd`, which is 6-10x larger (and, I just learned, is now deprecated by the maintainers!).

The important question is: what have I sacrificed? The primary tradeoff is that the 'read your own writes' experience can feel sluggish if you are used to optimistic UI via React setState(). This is a hard one to stomach as a React dev. But Phoenix comes with GitHub-style viewport loading bars, which provide enough user feedback to be passable.

p.s. guess what Supabase Realtime is using under the hood[2] ;-)

[1] https://bevacqua.github.io/dragula/ [2] https://supabase.com/docs/guides/realtime/architecture