Running an open source app: Usage, costs and community donations (spliit.app)
216 points by scastiel 22 hours ago | 127 comments





I'm always curious what folks use for their database for things like this. Even though I like SQLite--a lot--my preference has become that the app is generally separate and mostly stateless. Almost always the data is the most important thing, so I like being able to expand/replace/trash the app infra at will with no worries.

Thought about maybe running a Postgres VPS, but I've enjoyed using neon.tech more than I expected (esp. the GUI and branching). I guess the thing that has crept in: speed/ease is really beating out my ingrained cheapness as I've gotten older and have less time. A SaaS DB has sped things up. Still don't like the monthly bills & variability though.


>Almost always the data is the most important thing, so I like being able to expand/replace/trash the app infra at will with no worries.

Have you used SQLite with Litestream? That's the beauty of it. You can blow away the app and deploy it somewhere else, and Litestream will just pull down your data and continue along as if nothing happened.

At the top of this post, I show a demo of attaching Litestream to my app, and then blowing away my Heroku instance and redeploying a clean instance on Fly.io, and Litestream ports all the data along with the new deployment:

https://mtlynch.io/litestream/
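The moving parts are small; a minimal sketch (paths and bucket names here are placeholders):

    # /etc/litestream.yml
    dbs:
      - path: /var/lib/myapp/data.db
        replicas:
          - url: s3://my-bucket/myapp

    # on a fresh host: restore first, then replicate continuously
    litestream restore -o /var/lib/myapp/data.db s3://my-bucket/myapp
    litestream replicate -config /etc/litestream.yml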


> No, my app never talks to a remote database server.

> It’s a simple, open-source tool that replicates a SQLite database to Amazon’s S3 cloud storage.

That was a very long walk to get to that second quote. And it makes the first quote feel deceptive.


S3 is acting as a backup, not a data store. The SQLite file is local to the app. This recent post discusses the tradeoffs in more detail and includes metrics. https://fractaledmind.github.io/2024/10/16/sqlite-supercharg...

Thanks for the feedback!

Can you share a bit more about why you feel it's deceptive?

The point I was trying to make is that database servers are relatively complex and expensive. S3 is still a server, but it's static storage, which is about as simple and cheap as it gets for a remote service.

Was it that I could have been clearer about the distinction? Or was the distinction clear but feels like not a big difference?


I'm currently using SQLite + Litestream with one app, though it's strictly Litestream as a backup/safety net and I'd be manually standing the thing back up if it came to building the server anew, as that's not automated.

If anything, I'd probably end up looking at a dedicated PG VPS. I've started to get used to a few Postgres conveniences over SQLite, especially around datetimes, various extensions, and more sophisticated table alterations (without that infamous SQLite 12-step process), etc. So that's been an evolution, too, compared to my always-SQLite days.
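For anyone who hasn't hit it: since ALTER TABLE in SQLite can't do most column changes, the documented workaround is roughly this (heavily abbreviated; table and columns are invented):

    PRAGMA foreign_keys = OFF;
    BEGIN;
    -- create the new shape, copy everything over, swap names
    CREATE TABLE expenses_new (id INTEGER PRIMARY KEY, amount REAL NOT NULL);
    INSERT INTO expenses_new (id, amount) SELECT id, amount FROM expenses;
    DROP TABLE expenses;
    ALTER TABLE expenses_new RENAME TO expenses;
    COMMIT;
    PRAGMA foreign_keys = ON;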


This is a well written guide, thanks!

Cool, glad to hear it was useful!

Spinning up a VPS for things like this is tempting to me too, but not having done significant backend work in over a decade my worry would be with administering it — namely keeping it up to date, secure, and configured correctly (initial setup is easy). What's the popular way of handling that these days?

Every case is different, but as a baseline: use Ubuntu or Debian for automatic security upgrades via unattended-upgrades[0]; harden ssh by allowing only pubkey authentication; disallow all public incoming connections in the firewall except https traffic if you're serving a public service; everything else (ssh, etc.) can go over WireGuard (Tailscale makes this easy). Use a webserver like nginx or Caddy for TLS termination, serving static assets, and proxying requests to an application listening on localhost or the WireGuard interface.

[0]: https://wiki.debian.org/UnattendedUpgrades
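A minimal sketch of that baseline (assuming Ubuntu/Debian with ufw; wg0 is the WireGuard interface, and all names are placeholders):

    # automatic security updates
    sudo apt install unattended-upgrades
    sudo dpkg-reconfigure -plow unattended-upgrades

    # /etc/ssh/sshd_config.d/hardening.conf
    PasswordAuthentication no
    PermitRootLogin no

    # firewall: only https is public; ssh only over wireguard
    sudo ufw default deny incoming
    sudo ufw allow 443/tcp
    sudo ufw allow in on wg0 to any port 22 proto tcp
    sudo ufw enable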


It's trivial to run MySQL (or a Percona variant) or Postgres, with some minor caching for simple apps.

I'm not sure what you are hitting that would go past the capacity of a small vps.

An independent VPS for the DB makes sense, but if the requests are reasonably cached you can get away with a single box (and beef up the backups), especially if it's something non-critical.


Definitely considering a dedicated Postgres VPS. I've not looked yet, but I'd like to locate a decent cookbook around this. I've installed Postgres on a server before for playing around, and it was easy enough. But there are a lot of settings, considerations around access and backups and updates, etc. I suspect these things aren't overly thorny, but some of the guides/docs can make it feel that way. We'll see, as it's an area of interest, for sure.

I went through this around a year ago. I wanted Postgres for Django apps, and I didn't want to pay the insane prices required by cloud providers for a replicated setup. I wanted a replicated setup on Hetzner VMs, and I wanted full control over the backup process. I wanted the deployment to be done using Ansible, and I wanted my database servers to be stateless. If you vaporize both my Hetzner Postgres VMs simultaneously, I lose one minute of data. (If I just lose the primary, I probably lose less than a second of data due to realtime replication.)

I'll be honest, it's not documented as well as it could be; some concepts like the archive process and the replication setup took me a while to understand. I also had trouble understanding what roles the various tools played. Initially I thought I could roll my own backup, but later deployed pgBackrest. I deployed and destroyed VMs countless times (my Ansible playbook does everything from VM creation via the Proxmox / Hetzner API to installing Postgres and setting up replication).

What is critical is testing your backup and recovery. Start writing some data. Blow up your database infra. See if you can recover. You need a high degree of automation in your deployment in order to gain confidence that you won't lose data.

My deployment looks like this:

- two Postgres 16 instances, one primary, one replica (realtime replication)
- both on Debian 12 (most stable platform for Postgres, according to my research)
- Ansible playbooks for initial deployment as well as failover
- archive file backups to rsync.net storage space (with zfs snapshots) every minute
- full backups using pgBackrest every 24 hrs, stored to rsync.net, Wasabi, and a Hetzner storage box
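A rough sketch of the key pieces on the primary (stanza name, paths, bucket, and endpoint are placeholders; pgBackrest repo credentials omitted):

    # postgresql.conf
    wal_level = replica
    archive_mode = on
    archive_command = 'pgbackrest --stanza=main archive-push %p'

    # clone the replica and auto-configure streaming replication
    pg_basebackup -h PRIMARY_HOST -U replicator -D /var/lib/postgresql/16/main -R -X stream

    # /etc/pgbackrest/pgbackrest.conf
    [main]
    pg1-path=/var/lib/postgresql/16/main
    [global]
    repo1-type=s3
    repo1-s3-bucket=pg-backups
    repo1-s3-endpoint=s3.wasabisys.com
    repo1-s3-region=us-east-1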

As you can guess, it was kind of a massive investment and forced me to become a sysadmin / DBA for a while (though I went the devops route with full ansible automation and automated testing). I gained quite a bit of knowledge which is great. But I'll probably have to re-design and seriously test at the next postgres major release. Sometimes I wonder whether I should have just accepted the cost of cloud postgres deployments.


I tested the app and found it awesome that it doesn't require account creation! You just get a private link, share it with the group, and when they open the link, it asks who they are to 'log in' as themselves. Of course, users could game the system by logging in as other members, but I think it's a compromise the developer made, knowing the user base and how frictionless it makes the user experience. Neat.

I've used this really neat website called when2meet[0] heavily for community organizing. You make an event, get a URL, and share it. Users choose their name, and they can even add a completely optional password to prevent impersonation.

I'm heavily inspired by it and working on an app for book clubs to host "elections" to choose their next book to read using a variety of voting systems (ranked choice, approval, scored, first past the post, etc).

[0] https://www.when2meet.com/


kittysplit.com has had this feature set for a decade. I try to promote it whenever I can

Re your question on saving costs: if you run it on a single Linux VPS, I suspect you can get the costs down to $5-10 per month.

One thing I find interesting is the growth chart: It's linear. But given that the app clearly has some traction, and is viral in nature, how come it isn't exponential?


I was thinking exactly this. I am the maintainer of ntfy.sh and my costs are $0 at the moment, because DigitalOcean covers 100% of it since it is open source. It would otherwise be around $100, though I must admit the setup is quite oversized. However, my volume is much, much higher than what is described in the blog.

I suspect that the architecture can be improved to get the cost down.


Same thought. I am absolutely blown away by how much Vercel overcharges. I host a similar application on netcup (like Hetzner) for $3 per month, and when it was on HN, it easily handled over 10k requests per hour.

Best I've been able to do is around $22 a month on DO; I'd love to hear about cheaper alternatives.

Hetzner my lord :)

Pi 5 + Cloudflare

I run a homelab that isn't too far from this, but I wouldn't recommend it without a few caveats/warnings:

- Don't host anything media-heavy (e.g. video streaming)

- Make sure you have reasonable upload speeds (probably 10+ Mbps min)

- Even behind Cloudflare, make sure you're comfortable with the security profile of what you're putting on the public internet

The min upload speed is mostly about making sure random internet users (or bots) don't saturate your home internet link.


Oh yeah definitely don't try this unless you have fiber and your ISP isn't too twitchy.

My suggestion is mainly for static site hosting since the Pi only needs to update the cloudflare cache when you make changes, and it should be able to handle a small db and few background services if you need them.
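For reference, the cache refresh on deploy can be a single call to Cloudflare's purge API (zone id and token are placeholders):

    curl -X POST "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/purge_cache" \
      -H "Authorization: Bearer $CF_API_TOKEN" \
      -H "Content-Type: application/json" \
      --data '{"purge_everything":true}'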


Any guides or blogs on how to do that?

Loads, but it'll depend on what you want to do exactly. I think this should be the approximate list of things:

- domain at Cloudflare set up to cache requests (this will take the brunt of the site traffic)

- static IP at home (call your ISP)

- port forwarding on your router to the Pi for 80 and any other ports you need, maybe a vlan if you're feeling like isolating it

- a note on the Pi that says "don't unplug, critical infra"

- the same setup on the Pi as you'd do on a cloud server, ssh with key pairs, fail2ban, etc.


The pay-only-what-you-use model is nice when your revenue also scales with use. For my projects, I wish there were plans with a higher fixed cost and risk only in availability, not in cost.

The only downside I've found is that people and orgs tend to overestimate their future usage across the board, so with pay-as-you-go, profits rarely match expectations, and a competitor with tier-based pricing can easily overtake you by capturing more $$ from the market. A notable exception is file storage, where I find people tend to underestimate what they will need.

I love the idea of this, but given the traffic numbers, it could run on a $4 Digital Ocean droplet with the same result. They've burnt over a grand just to use Vercel. Maybe I'm just older, but I don't understand the logic here. A basic VPS, set up once, would have the same result and would be neutral in cost (it's how I run my own little free apps). Maybe the author is lucky enough that $100/mo doesn't really affect them, or they're happy to pay for the convenience (my assumption).

Running a database accessed that many times on a $4 Digital Ocean droplet? I'd be very curious to see that ;)

The web hosting costs basically nothing. Most of the cost comes from the database.


6k visits per week * 5 page views per visit is one view per 20 seconds on average. Even very modest hardware with naively written application code should have no problem handling thousands of CRUD database queries per second (assuming every query doesn't need a table scan or something).
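Spelled out:

    6,000 visits/week × 5 views/visit = 30,000 views/week
    30,000 views ÷ 604,800 s/week ≈ 0.05 views/s, i.e. one view every ~20 s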

Modern computers are mind-bogglingly powerful. An old laptop off eBay can probably handle the load for business needs for all but the very largest corporations.


So many people don't seem to understand how efficient modern machines are.

As someone who is literally using old laptops to host things from my basement on my consumer line (personal, non-commercial) and a business line (commercial)...

I can host this for under 50 bucks a year, including the domain and power costs, and accounting for offsite backup of the data.

I wish people understood just how much the "cloud" is making in pure profit. If you're already a software dev... you can absolutely manage the complexity of hosting things yourself for FAR cheaper. You won't get five 9s of reliability (not that you're getting that from any major cloud vendor anyways without paying through the nose and a real SLA) but a small UPS will easily get you to 99% uptime - which is absolutely fine for something like this.


As DHH said somewhere, it's incredible that the modern cloud stack has managed to get PROGRAMMERS to be scared of COMPUTERS. Seriously, what's with that? That shouldn't even be possible?

If you can understand programming, you can understand Linux. Might take a while to be really confident, but do you need incredible confidence when you have backups? :)


Far too late for the edit window, but the keynote[1] is an absolute must-watch for anyone who does anything related to web development

[1] https://www.youtube.com/watch?v=-cEn_83zRFw


Not just somewhere, but in the Rails World 2024 opening keynote, and it was absolutely hilarious!

Especially with that meme he showed about Vercel's +500% markup lmaoo

Don't be afraid of computers, don't be the pink elephant!


The problem is that my coworkers are morons who seem incapable of remembering to run a simple `explain analyze` on their queries. They'd rather just write monstrosities that kindasorta work without giving a single damn about performance.

It seems like computers are getting more capable, but developers are becoming less capable at roughly the same pace.
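For anyone following along, the check really is just one keyword in front of the query (table and columns here are invented):

    EXPLAIN ANALYZE
    SELECT o.id, o.total
    FROM orders o JOIN users u ON u.id = o.user_id
    WHERE u.email = 'x@example.com';
    -- the output shows the chosen plan, row estimates, and actual timings,
    -- which is where the accidental sequential scans show up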


> It seems like computers are getting more capable, but developers are becoming less capable at roughly the same pace.

"Andy giveth, and Bill taketh away."

Computers keep getting faster (personified as Andy Grove, from Intel), and software keeps getting slower (Bill Gates, from Microsoft).


> It seems like computers are getting more capable, but developers are becoming less capable at roughly the same pace.

And that makes perfect sense. Why should humans inconvenience themselves to please the machine? If anyone’s at fault, it’s the database for not being smart enough to optimize the query on its own.


At my last job, we had architects pushing to make everything into microservices despite how absolutely horrible that idea is for performance and scalability (and understanding and maintainability and operations and ability for developers to actually run/test the code). The database can't do anything to help you when you split your queries onto different db instances on different VMs for no reason.

I heard we had a 7 figure annual compute spend, and IIRC we only had a few hundred requests per second peak plus some batch jobs for a few million accounts. A single $160 N100 minipc could probably handle the workload with better reliability than we had if we hadn't gone down that particular road to insanity.


> ... microservices despite how absolutely horrible that idea is for performance and scalability

Heh, reminds me of a discussion I had with a coworker roughly 6 months ago. I tried to explain to them that the ability to scale each microservice separately almost never improves the actual performance of the platform as a whole - after all, you still need network calls between each service, and you could've just started the monolith twice instead. That would most likely have needed less RAM too, even if each instance consumes more, since you now need fewer applications running to serve the same request.

This discussion took place in the context of a b2e saas platform with very moderate usage, almost everything being plain CRUD. Like 10-15k simultaneous users making data entries etc.

I'm always unsure how I should feel after such discussions. On the one hand, I'm pretty sure he probably thinks that I'm dumb for not getting microservices. On the other hand... Well... ( ꈍ ᴗ ꈍ )


Oh man. Guess you're one of those. If you're too lazy to do your job then maybe look into other lines of work. Or go work on database systems to create that magical optimizing query engine you think should exist.

That's beside the point I'm making. Technology should develop towards simplifying humanity's life, not making it more complicated. It's a good thing we don't have to use clever loop constructs anymore, because compilers do it for us. It's a good thing we don't have to obsess about the right varchar size anymore, because Postgres' text does the right thing anyway.

It's a systemic problem. You're going to lose the battle against human nature: ever noticed how, after moving from a college dorm into a house, people suddenly manage to fill all the space with things? It's not like the dorm was enough to fit everything they ever needed; they had to accommodate themselves to it. This constraint is artificial, exhausting to keep up, and, once gone, will no longer be adhered to.

If a computer suddenly becomes more powerful, developers aren't going to keep up their good performance-optimisation habits, because they had those only out of necessity in the first place.


You're right but I'll play devil's advocate for teaching purposes:

* Usage won't be uniformly distributed and you may need to deal with burst traffic for example when a new version is released and all your users are pulling new config data.

* Your application data may be very important to your users and keeping it on a single server is a significant risk.

* Your users may be geographically distributed, such that a user on the other side of the world has a severely degraded experience.

* Not all traffic is created equal and, especially paired with burst traffic, one expensive operation like a heavy analytical query from one user could cause timeouts for another user.

Vercel does not solve all of these problems, but they are problems that may be exacerbated by a $4 droplet.

All said, I still highly encourage developers not to sell their soul to a SaaS product that couldn't care less about them and their use case, and to consider minimal infrastructure and complexity in order to have more success with their projects.


Is this really playing the devil's advocate though? I know this is a simplification, but Stack Overflow launched on a couple of IIS servers and rode its exponential growth rather well. Sure, they added more than "a couple" of web servers and improved their SQL Server quite a bit, but as far as I recall they didn't even move to a CDN until five or six years after they grew. Eventually they moved into the cloud, but Spliit doesn't have even a fraction of the traffic SO did in its early days. As such, I don't think any of the challenges you mention are all that relevant in this context, aside from having backups. Perhaps also some redundancy by having two $4 droplets?

Is the author even getting paid for their services though? If they aren't then why would they care? I don't mean that as rude as it sounds, but why would they pay that much money so people can use their product for free?


> may be geographically distributed such that a user on the other side of the world may have a severely degraded experience.

Okay, am I crazy or can you not really solve this without going full on multi-region setup of everything? Maybe your web server is closer to them but database requests are still going back to the "main" region which will have latency.


* That's just static files. Even a $4 droplet will hardly ever run into issues serving that, even with hundreds of simultaneous requests.

* Okay, I guess that means we should use 2? So that's $8 now.

* Vercel really doesn't help you there beyond serving static files from cdn. That hardly matters at this scale, you should keep in mind that you "only" add about 100ms of latency by serving from the other side of the globe. While that has an impact, it's not really that much. And you can always use another cdn too. They're very often free for html/js/css

* Burst traffic is an issue, especially trolls that just randomly DOS your public servers for shits and giggles. That's pretty much the only one vercel actually helps you against. But so would others, they're not the only ones providing that service, and most do it for free.

Frankly, the only real and valid reason is the previously mentioned: they've likely got the money and don't mind spending it for the ecosystem. And if they like it... Who are we to interfere? Aside from pointing out how massively they're overpaying, but they've gotta be able to handle that if they're willing to publish an article like this


People use Vercel ... because...

...haven't worked it out yet, all I can come up with is "they don't know any better".

Surely that can't be true?


My understanding is that DO VPSes are underpowered (as are VPS offerings from most other VPS vendors). Dollar for dollar, bare-metal offerings from Hetzner, OVH, etc. are far more powerful.

That said, I completely agree: a $4/month DO VPS can run MySQL and should easily handle this load; in fact I've handled far bigger loads in practice.

On a tangent: any recommendations for good US-based bare metal providers (with a convenience factor comparable to OVH, etc)?


> good US-based bare metal providers

The times I've needed it, DataPacket (not based in US, but has servers in the US) and Vultr (based in the US) have both been good to me.


In a trademark travesty, I must ask DataPacket.com or DataPacket.net?

Sorry, I was referring to datapacket.com

Thank you so much. I’ll take a look at these.

Hetzner is of course not U.S.-based, but has expanded to two U.S. sites (Oregon, I think, and Virginia), so that could be an option. Caveat: I have not used Hetzner in the U.S., so I can't speak to their quality.

That's news, thank you. I'll check this out.

Uh, actually, at a quick glance it seems the U.S. sites are more for their cloud offering and maybe not bare-metal servers, I think (sadly): https://www.hetzner.com/cloud

My open source service, lrclib.net, handles approximately 200 requests per second at peak (yes, you read that right: approximately 12,000 requests per minute) on a simple €13 Hetzner cloud server (4 AMD-based vCPUs, 8 GB RAM). I'd love to write a blog post about how I made it possible sometime in the future, but basically, I cheated by using Rust together with SQLite3 and some caching.

I was surprised by the cost of Vercel in that blog post too, which is why I dislike all kinds of serverless/lambda/managed services. For me, having a dozen people subscribing to $1-$2/month sponsorship on GitHub Sponsors is enough to cover all the costs. Even if no one donates, I’d still have no trouble keeping the project running on my own.


> Running a database accessed that many times on a $4 Digital Ocean droplet?

How many times per second is the DB actually accessed? As far as I can tell by the metrics, they're doing ~1.7 requests/minute; you'll have a hard time finding a DB that couldn't handle that.

In fact, I'd bet you'd be able to host that website (the database) in a text file on disk without any performance issues whatsoever.


I didn't mean it quite so insultingly, but yes, even a very modest server would handle that kind of load easily. You're not particularly high throughput (a few requests per second?) and I imagine the database is fairly efficient (you're not storing pages of text or binary blobs). I think you'd be pleasantly surprised by what a little VPS can do.

I think it would be fine. I run a little private analytics service for my own websites. That service isn't as busy but handles ~11k requests per month. It logs to a SQLite database. It does this on a little Raspberry Pi 400 in my home office and it's not too busy. The CPU sits at 1% to 3% on average. Obviously there are a lot of differences in my setup but I would think you could handle 10x the traffic with a small VPS without any trouble at all.

You can read a little bit more about my analytics setup here:

https://joeldare.com/private-analtyics-and-my-raspberry-pi-4...


It's surprising that you ask for advice on this topic in your blog, but are then very dismissive (complete with sarcastic wink) of the advice?

You could run it on Cloudflare workers for free with plenty to spare. You get 5M reads/100k writes per day on D1.

OTOH, if you want managed Postgres, it seems like you always have to pay a fairly high minimum.
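A minimal sketch of the Workers + D1 route (binding name, table, and query are invented):

    // wrangler.toml declares the binding:
    // [[d1_databases]]
    // binding = "DB"
    // database_name = "expenses"
    // database_id = "..."

    export default {
      async fetch(request, env) {
        // read-only query against the D1 binding
        const group = new URL(request.url).searchParams.get("group");
        const { results } = await env.DB
          .prepare("SELECT title, amount FROM expenses WHERE group_id = ?")
          .bind(group)
          .all();
        return Response.json(results);
      }
    };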


Running the 800th most popular website in the world (25-50M pageviews per day) on a 1GB VPS (Spring Boot, MariaDB, Redis)

Very possible.


"that many times". It's nearly zero traffic.

https://f5bot.com/ was free for like 8 years and it processed hundreds of thousands of db records a day, and it barely cost anything.

I'll email this to you, but you could save a ton of money using a serverless database solution like Supabase or NeonDB.

This is cool, dude. Thank you for sharing. Irrespective of the actual numbers I’m always curious how people fund projects like this.

One thing I've been interested in is the idea of decentralized handling for this. That is, the project is funded in advance, and every month, if its bills don't get paid, it dies. If it receives more than enough, it buys T-bonds for the appropriate duration and then burns them down over time.

Perhaps in the past it would have had to be automated, but I wonder if, in the near future, a limited AI agent could run the server and you could leave it alone to do its thing.


Would be interesting to see how this compares to https://splid.app/

Well, this one is open source, for one.

What they need is a payment provider integration so you can pay immediately by ACH or credit card. That could also be a monetisation option for them.

There are dozens of other apps that do that already, and I don't think this one needs to follow. Staying open source, free, and convenient for cash transactions is better in my opinion.

You just invented money laundering

Not really, the KYC is usually done on the payment layer depending on which payment platform you use. If you are doing your own ACH, yes, you will need KYC. But if you are using something like stripe connect or dots.dev then KYC is their problem.

Do I read something wrong, or do the stats amount to ~400 daily visitors with ~2,500 page views per day? That's about ~1.7 requests per minute... and they pay $115/month for this?

I'm 99% sure I'm reading something wrong, as that's incredibly expensive unless this is hosting LLM models or something similar, but it seems like it's a website for sharing expenses?


I think this is just the natural conclusion of the new generation of devs being raised in the cloud and picking a scalable serverless PaaS like Vercel as the default option for any web app.

A more charitable reading is that they pick the technologies that the jobs they want are hiring for, even if they don’t make sense for this simple application.


> I think this is just the natural conclusion of the new generation of devs being raised in the cloud and picking a scalable serverless PaaS like Vercel as the default option for any web app.

I'm not sure, I'm also "new generation of devs" I suppose, cloud had just entered the beginning of the hype cycle when I started out professionally. Most companies/individuals at that point were pushing for "everything cloud" but after experiencing how expensive it really is, you start to look around for alternatives.

I feel like that's just having an "engineering mindset", rather than a matter of what generation you belong to.


> ...after experiencing how expensive it really is, you start to look around for alternatives...

One would think that would be the common-sense case... but in corporate America - at least at the last handful of companies I worked at - some companies are only now getting workloads up to the cloud, so they have not yet felt the cost pain. Or, in other cases, firms are living in the cloud and have seen the exorbitant costs, but move waaaay too slowly to migrate workloads off cloud (or hybridize them in smart ways for their business). Or, in still other cases I have seen, instead of properly analyzing the function and costs of cloud usage - and truly applying an engineering mindset to matters - some so-called IT leaders (who are too busy with PowerPoint slides) will simply lay off people and "achieve savings" that way.

Welcome to being a technologist employed at one of several/many American corporations in 2024!


Certainly, I just mean that we are hitting a point where there can be professional devs, with multiple years of experience at tech companies successfully building software, who have only ever known and worked with a PaaS to deploy an app.

It's frustrating too because deployment technologies and tools continue to get better and better. It's never been easier to deploy an application + database to some arbitrary computer. You can do it declaratively, no SSH, no random shell scripts, no suspicious fiddling.

Also, sidenote: but for small stuff you can just deploy in your home. I've done it before. It's really not that scary, and odds are you have a computer laying around. The only "spooky" part is relying on my ISP router. I don't trust that thing, but that can be fixed.


>It's never been easier to deploy an application + database to some arbitrary computer. You can do it declaratively, no SSH, no random shell scripts, no suspicious fiddling.

May I ask what you are using?


> new generation of devs being raised in the cloud

I unfortunately sorta put myself in this category where my PaaS of choice is Firebase. For this cost-splitting app however, what would you personally recommend if not Vercel? Would you recommend something like a Digital Ocean Droplet or something else? What are the best alternatives in your opinion?


Yes, I believe a Droplet or VPS (virtual private server) from some other provider would be sufficient. Digital Ocean isn't the cheapest, but it's pretty frictionless, slick, and has a lot of good tutorial articles about setting up servers.

You'd have a Linux machine (the VPS) that would have at least 3 programs running (or it is running Docker, with these programs running inside containers):

- Node.js

- the database (likely MySQL or PostgreSQL)

- Nginx or Apache

You'd set up a DNS record pointing your domain at the VPS's IP address. When someone visits your website, their HTTP requests will be routed to port 80 or 443 on the VPS. Nginx will be listening on those ports, and forward (aka proxy) the requests to Node, which will respond back to Nginx, which will then send the response back to the user.

There are of course security and availability concerns that are now your responsibility to handle and configure correctly in order to reach the same level of security and availability provided by a good PaaS. That's what you're paying the PaaS for. However, it is not too difficult to reach a level of security and availability that is more than sufficient for a small, free web app such as this one.
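A minimal server block for that proxy step might look like this (domain and port are placeholders; the TLS/443 config, e.g. via certbot, is omitted):

    # /etc/nginx/sites-available/myapp
    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://127.0.0.1:3000;   # the Node.js app
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }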


Could you continue on about security and availability? This is exactly the gentle intro I've been looking for.

I'm guessing rate limiting, backups, and monitoring are important, but I'm not sure how to go about it.


I don’t think that the difference is $110/month, but surely reading that you realise there’s a lot more going on there than “point vercel at a git repo and you’re done”. I don’t know how long it would take me to install docker and configure the above, but it’s certainly a few hours. I tried vercel for the first time a few weeks ago, and I had a production ready site online with a custom domain in about 5 minutes.

I’ve commented here before that on AWS (which I’m fairly familiar with) I could set up ECS with a load balancer and have a simple web app with rds running in about 30 minutes, and literally never have to touch the infra again.


I'm an old-school PHP web developer, and my immediate thought is to go to OVH or similar and get a VPS running Ubuntu. A quick run of sudo apt install lamp-server^ and I'm ready to go.

Or they're optimizing for not being a sysadmin, which some people can't do, and which even some of the people who can find to be very ungratifying work. For a project that runs on this person's enthusiasm, that seems not crazy.

It's certainly possible to spin up your own db backup scripts, monitor that, make sure it gets offsite to an s3 bucket or something, set yourself a calendar reminder to test that all once a month, etc... but if I had to write out a list of things that I enjoy doing and a list of things that I don't, that work would feature heavily on the "yeah, but no" list.
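For scale, though, the core of such a script can be tiny; a sketch (database, bucket, and schedule are invented, and it assumes pg_dump and aws credentials are already set up):

    # crontab -e: nightly compressed dump streamed straight to S3
    0 3 * * * pg_dump -Fc myapp | aws s3 cp - s3://my-backups/myapp-$(date +\%F).dump

The monthly restore test is the part that actually takes discipline.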


Yes, if you don't want to do that work and are happy to pay someone else to take care of it, then that is great. But if you like making free web apps, relying on a PaaS will get expensive.

If you become a sysadmin, not only do you save $100 per month but you can also add it to your CV.

DHH (Rails founder) thinks you should dare to connect a server to the internet: https://world.hey.com/dhh/dare-to-connect-a-server-to-the-in...

(I already submitted this once, but given the discussion here, I think it's worth posting again, if my rate limit allows it)


> The merchants of complexity thrive when they can scare you into believing that even the simplest things are too dangerous to even attempt by yourself these days.

Awesome first sentence! I know I'm going to agree with the article just from that. This applies to so many things in life, too. We've been taught that so many things people routinely did in the past are now scary and impossible.


> you can also add it to your CV

That can backfire and give an employer the idea you want to do that work though. I not only hate it, but nobody gives a damn until stuff breaks and then everyone is mad. You rarely get rewarded for stuff silently sitting there and working.

edit: to be clear, I think doing it yourself once is great experience. And I've run small web apps on a single server, all the way from supervisord -> nginx -> passenger -> rails with pg and redis. I'd rather build features or work on marketing.


Yeah, I'm confused too. Running some sort of VPS would totally do the job, no?

I'm fairly sure you could host this on a last-gen Raspberry PI at home, if you live close to where your users are :)

You definitely don't need the latest gen. Someone else did the math upthread and came to one request every 20 seconds. Factor in burstiness - and that a particularly bad burst slows the system down a little, making the next request take even longer, etc. (ask me how I learned that lesson) - and it's probably good to budget for handling multiple requests per second. For this application, my understanding is you've got a handful of people in a group splitting a couple of expenses, so the data processing is small beans. That'll definitely run on a first-gen Pi if you optimise it properly, or a 2nd-3rd gen one if you don't want to spend the time.

I've been going down the VPS rabbit hole lately since I have some toy projects I want to host and really don't like the unpredictable pricing model of these "pay as you go" providers like Vercel. E.g. I really love Supabase but it's hard to justify jumping straight to the $25/month plan in combination with Vercel costs.

I was surprised how extremely easy it is to get set up with Coolify on a Hetzner VPS, which has preset install options for NextJS + Supabase + Posthog + many others. And I get the standard autodeploy on commit functionality. The open-source versions are missing some features, and I don't get the slick Vercel admin interface, but for a pet project it works great. I'm also by no means a sysadmin expert, but with ChatGPT it's pretty easy to figure things out now.


The inefficiency is bonkers but understandable. I could host this app for like ~$60/year, generously, with little to no devops work. It's painful to see the creator paying out of pocket for such a great project because the Vercel marketing introduced such massive inefficiencies to the ecosystem.

Even less when I pay for a dedicated machine running all of my hobby projects. Gratuitous Kamal 2 plug. Run your personal projects all on one machine.


Where would you host this for $60 a year?

I would use hosting with SSH access. I am based in Poland, so we have MyDevil.net. But you can also just rent a VPS for $5; you then have to take care of setting everything up yourself.

First thing I thought while reading was Firebase - it's interesting how much it would cost there.


Hetzner CPX11 in Ashburn - 150 ms latencies to Europe are totally fine for this use case. With 15k groups and 162k expenses (guesstimating 30k users, email logs per expense, etc.), you're not even pushing 2 gigabytes of disk space (conservatively), nor are you doing anything computationally expensive or stressful for the DB under normal load. With decent app & DB design, like proper indexing, 2 vCPUs and 2 GB RAM are more than enough.

So that's how vercel makes their money.

That, and a bunch of Twitter "indie hackers" who get traffic spikes that result in hundreds of dollars in bills. Seriously, just get a VPS.

Their marketing team needs a raise.

For reference, 100 dollars a month gets you this bare metal server on hetzner: Intel® Core™ i9-13900, 64 GB DDR5 ECC, 2 x 1.92 TB

... Should be more than enough to handle 2 requests per minute, could probably handle 100x of that.


My i5-6600k at home can handle ~15k requests per second for a toy social media app with postgresql assembling the xml to send to the client (though I've done some batching optimization and used rust for my application server to hit that). Passmark cpubenchmark suggests a 13900 should be 6-8x more capable than that.

So it should be able to handle somewhere in the ballpark of 2,000,000x the required load, or maybe 100,000x without the application level optimization.

(TLS reduces this by a factor of ~10 if you're doing handshakes each time. Despite what blogs claim, as far as I can tell, if your CPU doesn't have QAT, TLS is very expensive)


If you're on Hetzner you can get a load balancer with TLS termination for $5/month. It's hidden in the cloud category but fully supports dedicated servers.

Of course, doing SSL on the server itself is more secure, but if that's a performance bottleneck, the load balancer can be a cost-effective compromise.


Yes, there's Cloudflare and all of that, but they'll do it for free.

Then you get to evaluate the gains you may get from caching and other potential optimizations from one of the best eyeball-connected providers in the world. Oh, plus the ability to fend off the largest DDoS attacks ever seen.

Cloudflare tunnels let you do all of this through an encrypted tunnel without exposing the machine/services to the internet at all. Cloudflare will still MITM all traffic, but so does Hetzner's load balancer (obviously). At least with the tunnel the connection is persistent, so you don't incur TLS handshake CPU overhead, etc., with each client connection.

Bonus points - you can move hosting providers without any hassle, configure hosting provider redundancy (Hetzner + whoever), all of that stuff.
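For reference, a minimal tunnel setup looks roughly like this (tunnel name, hostname, and port are placeholders):

    cloudflared tunnel login
    cloudflared tunnel create myapp
    cloudflared tunnel route dns myapp app.example.com

    # ~/.cloudflared/config.yml
    tunnel: myapp
    credentials-file: /home/user/.cloudflared/TUNNEL_ID.json
    ingress:
      - hostname: app.example.com
        service: http://localhost:3000
      - service: http_status:404

    cloudflared tunnel run myapp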


Yet another testimony to how utterly few people are willing to pay for what they use in the abuse system called "open source". People, start charging for your work, and leave the freeloaders behind!

> A short disclaimer: I don’t need donations to make Spliit work. I am lucky enough to have a full-time job that pays me enough to live comfortably and I am happy to give some of the money I earn to the community.

And this is why open source will finally die, because being comfortably employed while still having surplus time and energy to work for free is an increasingly rare thing among the younger generations.

A better way to "give back to the community", instead of making open source software, would be to purchase software from other indie developers.


> People, start charging for your work, and leave the freeloaders behind!

We already have a profit-oriented market. And we have empirical evidence that profit-oriented markets do not like open source (for their primary products).

> being comfortably employed while still having surplus time and energy to work for free is an increasingly rare thing among the younger generations.

edit: removed anecdote

The cost of living will never rise so much that the upper 50% can't easily make enough money. (Otherwise what? The other 150 million people go homeless?)

And unless our industry sees a major shift, which I don't see happening, software engineers will continue being comfortably in the upper 50%.


> We already have a profit-oriented market. And we have empirical evidence that profit-oriented markets do not like open source (for their primary products).

That's a given. If you open source your code, other developers will steal it and sell your software. Just like billion-dollar tech companies are the main beneficiaries of open source today, which some guy made for free. Excuse me, I meant for $42 in donations.


When I write open source libraries I consider the ones benefitting to be the general public.

Even if my libraries were used only by mega corporations (which they aren't) there would still be a benefit to the public: If companies have lower cost, they will charge lower prices, benefitting customers / the general public. (And yes, they will lower prices. Most markets are not monopolies.)


Open source never benefits the general public, because open source developers never make a product polished and user-friendly enough to be usable by the general public.

Instead, open source mainly benefits other developers. But at the end of the chain there has to be a product that is of use to non-developers, because developing isn't for development's sake. And the person who makes that product reaps all the monetary benefits from the work that others have done.

If FOSS people made complete products which were end user friendly, I'd buy the argument of benefitting the general public.


> developing isn't for development's sake.

[citation needed]

> the person who makes that product reaps all the monetary benefits from the work that the others have made

Which means that they can offer their product for a lower price, which then benefits the general public.

Companies being able to operate cheaper / more efficiently does benefit the general public, as long as the market isn't a monopoly. And as per my above comment, most markets are not monopolies.

> open source developers never make a product polished and user-friendly enough to be usable by the general public

I've been using Audacity, Gimp, Inkscape, uBlock Origin, and many others long before I knew what FOSS means. Spliit is also pretty cool ;)


That's a very interesting perspective, thanks. :-)

That's why I make all my software AGPL now.

I haven't published any software at all recently. But if I did (anything non-trivial), it would be AGPL. Or even SSPL.

Permissive licensing (MIT, BSD, Unlicense, public domain, etc) is a scam to make you work for companies for free - if your software is worth anything to them, that is. They told developers they should use MIT licenses so more people would use their software. That's true. They didn't ask whether that was a good thing.


If you don't want to give away software, and it sounds like you don't, then: don't.

Perhaps you're under the impression that I blindly click the button for a permissive license? No. I read it first. I know what it allows. That's why I choose it.

I think it's nice when companies make money, for the record. Pays for houses, puts food on the table, sends kids to summer camp and college. Some of them even make a lot of money. That's fine too.

If they want to use my software in the process, more power to them. That's why I put my name next to the copyright notice, on a license which says in plain English that they can do that.


Did you know that every dollar a company makes gets taken away from someone? It's zero-sum if you aren't close to Jerome Powell. Why assume the dollar is better in the hands of the company owner than whoever had it before?

The economy is not a zero-sum game. We (non-politicians/non-billionaires) have significantly more resources than we had 100 years ago, and we will have significantly more resources in 100 years than we do now. And open source developers are a small part of why.

Every dollar a company makes is given to them by someone, in exchange for something else.

I go to the supermarket, they don't take my money. I pull it out of my pocket and swipe it into their coffers. They have food, you see. Which I can eat. Unlike money.

Strange how a game you describe as zero sum has built the prosperity of the modern world. I wonder if something is missing from your understanding of how that game is actually played.


"Open source will finally die" said on a website likely running on some linux-based server, with some JS frontend, some open source/commercially licensed DB, and communicating with protocols regulated by a non-profit organization. Also in the future maybe reading this page from a device using a RISC-V processor. Sure.

I hope it brings a tear of joy to the corner of the eyes of those selfless FOSS programmers that they've done their share to help Y Combinator be worth $600 000 000 000. That money is surely better spent on people who deserve it better.

If anything is dying, it's proprietary software. Which is great for all of us, because open source is a vastly more efficient system.

To be fair they're not heavily soliciting donations, and even actively say that they don't need them. So it's not surprising people don't prioritise giving anything. Many users probably haven't even thought of it

Are you the guy with the https://osspledge.com/ billboards around San Francisco? Haha. They’re funny. I enjoyed the art. If it’s actually you, I’d be curious who the illustrator is or if you used generative AI.

I'm not that guy, I'm against open source and free loading. Why would multi million dollar CEOs give anything to FOSS programmers when they're developing their crucial infrastructure for free?

Work for free for huge companies so they can make billion-dollar profits, while at the same time demanding unionization. Refusing to sell your work to consumers who are willing to pay, yet happy to provide free tech support to freeloaders who wouldn't give you a cent. What's the logic?


> What's the logic?

Some of us like making things, and are happy to share our excess production with the world.

Like any other good work, it does not require acknowledgement or reciprocation, and the benefits are not part of a zero-sum economy where the giver is harmed by any action of the receiver.

You're on record as being vehemently anti-OSS. Why does it offend you so much that other people prioritize forms of compensation differently than you do?


> You're on record as being vehemently anti-OSS.

That's true, I'm the chief anti-OSS crusader on HN and online. I'll give it a rest after this thread, to breathe and give all a chance to recover strength.

> Some of us like making things, and are happy to share our excess production with the world.

Selling those things is still sharing with the world. Most paid software is cheap to purchase.

If FOSS were an ecosystem where end users had the common(?) courtesy to donate just a little bit to at least one of the projects they use, then I'd have nothing to say. But whenever I use any FOSS code and donate, I usually find myself alone with two or three other people who have donated.

Unlike most other professions, programming is something most people start with as a hobby in young years. So maybe they don't value their own hard work and effort, even though they've matured past the young hobbyist phase? And then they get misguided by open source activists to labour for free.

A young artist who publishes their songs online for free in the hopes of becoming famous, will still retain copyright on those works. No record label can come around and start selling those songs without even letting the artist know. Much less stealing and selling the songs of a well-established artist if he/she decides to release music for free.

I just don't like freeloading, and I don't like enablers either.


Selling a thing comes with greater obligations than giving it away.

I am unwilling to accept those obligations, in most cases.

I am, however, perfectly happy to share some of the work that I do back into an ecosystem which I have benefited from. I also volunteer for organizations I care about, and I pick up litter in public parks. :)

I do not believe that I am being exploited. The Internet is and always has been built on open source -- and as bad as the Internet is, it would be worse if it didn't exist or if it was a proprietary network.

I think you're taking a real problem (funding of valuable work) and exploding it into an argument against open source, which just doesn't follow for me.

I do 100% support finding a way to monetarily compensate people who do valuable work and contribute it to the world. Theoretically. Practically, it gets messy real quickly and I don't see a good broad solution.


> I am unwilling to accept those obligations, in most cases.

This is the argument I keep hearing whenever a discussion about open source boils down to money, and I think it is wrong. Because in truth there is no big commitment if you sell some software for $10 or $20. In the worst case, if it doesn't work for the customer, you give a refund. When you go out to buy a sandwich or a couple of beers for $10, do you think they are worried about any commitment? No, it's "Here you go, enjoy!" You won't have any more obligations than you are willing to take on, just like with open source.

> I also volunteer for organizations I care about, and I pick up litter in public parks.

Would you pick up litter that a mega-corp is dumping in the woods, while they keep dumping more and laughing at you?


Not to be "that guy" but...

To clarify some confusion in this thread, it might be helpful to distinguish "open source" (the application) from "free" (this hosted instance of the application). Munging the two together might lead to some incorrect conclusions. Running a "free" application for others is going to have certain costs. The cost of running an "open source" application is going to depend entirely on the resources that application consumes, which, if run privately, might be a lot less.




