Ourchive Beta: Week One

Well, week one is in the books for the Ourchive beta. We have a huge pile of bugs and to-dos, which is exactly what you want from a beta, so thanks to our 10-ish initial beta testers for digging into site features. Thanks also to thedeadparrot who, in addition to being a fellow fandom Masto moderator (we're in the tongue twister business), submitted PRs for our technical docs and Ourchive itself.

High-level summary

Bugs! Bugs! Bugs! Bugs!

But we knew that was coming. One of the biggest challenges with building this kind of site is that you have to plow through a lot of really boring work before you get to implement anything fun. In this case, we had lots of little errors with adding or updating data that only showed up when more people were using the site with greater use case variety: handling certain types of special characters, higher-load AO3 imports, doing things in a slightly different order, etc. It's a lot easier to squash bugs when it's not just one or two people generating them, so breaking shit is absolutely vital work on the beta testers' part.

We also got some great UI feedback, and our primary goal for the next week is to turn that feedback into site improvements. We're focusing on problem areas for mobile and on posting; the forms (work, bookmarks, collections) got a workout, and we have a ton of feedback on how to improve them.

Still to come: an actual homepage, more documentation, and a tagged release. A very enterprising person on fail_fandomanon set up the site despite no release being tagged and no install instructions existing. That made my day. Thank you, random person. I promise the dev site's email will not be hardcoded on release.

Next week's goals

  • Implement high-value improvements like better work icons & buttons, a better work posting page, and better bookmark + collection posting & updating
  • Implement more unit tests
  • Implement more regression tests (owned by Kate)
  • Complete accessibility audit (owned by verity) & scope and start updates (owned by Imp)
  • Onboard more beta testers (pending the above improvements, as otherwise we anticipate getting a lot of the same bugs/issues that the initial group found)
  • Update the roadmap, as a lot of people have the same kinds of questions that can be answered by a more detailed roadmap (this is a good thing; it means we're correctly anticipating what people will want/need)
  • Identify 'best fit' data for tags & attributes; update demo site & fixtures
    • We've gotten a few comments about tag types that assume the data's hardcoded, so we want to make it super clear that it's not; you can have your tags be whatever you want! By the same token, some of our tags are currently confusing or just not well-suited to the job, so we should be a bit more opinionated about our demo data.

Misc

The site is ugly

Bestie, I know. If you are a UI/UX person, get in touch.

The socials exist so I should use them

Well, technically "the social", singular. We are sending out updates on Mastodon; it's been a bit of a mental shift to doing all this stuff in public, so thanks to everyone for bearing with us while we adjust to "announcing our production pushes" and "keeping people up to date on what we're doing", you know, basic stuff.

What got my ass

DIGITALOCEAN WHEN I FIND YOU.

tl;dr for non-technical people: I screwed up server config in a way that has nothing to do with the archive code, and ended up redeploying with a different provider as a result. Annoying, yes. Embarrassing, undoubtedly. But on the bright side, it gave me a chance to re-run and improve the "install on VPS" documentation we'll be releasing as part of MVP. We live, we learn.

Technical summary:

We had an incident this week where I locked myself out of our DigitalOcean droplet. I'm still not really sure how I did this; it might have been when I was trying to disable root login, or because I was connecting over my phone's hotspot, or because I was on the hotel's internet plus my own VPN. But, regardless of root cause, DigitalOcean appears to have some kind of networking layer that auto-drops certain connections when you're trying to SSH into your VPS, and in the course of trying to debug this, I...locked myself out of the ourchive.io droplet, seemingly permanently.
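For anyone making the same kind of change, here's a hypothetical sketch of how I'd now approach disabling root login without locking myself out. This assumes Ubuntu 22.04 with OpenSSH; none of these steps are specific to Ourchive, and the golden rule is to keep your existing SSH session open until a fresh connection succeeds.

```shell
# Back up the current config before touching anything.
sudo cp /etc/ssh/sshd_config /etc/ssh/sshd_config.bak

# Disable root login (handles both commented and uncommented forms of the directive).
sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config

# Validate the config BEFORE applying it; a syntax error here is a lockout waiting to happen.
sudo sshd -t

# Reload rather than restart (existing sessions stay up), then verify a NEW
# connection from a second terminal before closing the one you're in.
sudo systemctl reload ssh
```

The `sshd -t` check and the "second terminal" habit are the parts that would have saved me here.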

In the course of trying to fix this, I also managed to get the server rejecting ALL attempts to reach ourchive.io (and yes, I checked UFW, fail2ban, etc. Nothing doing). This is extremely non-ideal, obviously, and resulted in what's functionally data loss, as I decided reprovisioning the environment would be better for beta testing than spending days trying to recover access. So now I have a powered-off droplet hanging out, waiting for me to yell at DigitalOcean.

I have never had this issue with Linode, and I'm kind of appalled that this is so common with DigitalOcean that their troubleshooting guide recommends simply deleting/rebuilding the droplet if you can't launch their console - particularly since, as far as I can tell, their standard backup tier only takes backups weekly (Linode's is daily!). This is an unacceptable level of provider interference: if I haven't configured a firewall to drop connections, you should not just randomly give me 'access denied' errors on SSH/SCP. So, we'll be releasing MVP with a generic VPS install guide ("assuming you have Ubuntu 22.04 installed, do this stuff") and continuing to build & test point-and-click installs. We might try DigitalOcean's managed app platform, since Ourchive should be installable with their Python platform, but that's post-MVP/a requirement for 1.0.
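If you hit similar "everything is rejected" behavior on your own VPS, this is roughly the diagnostic checklist I ran before concluding the problem wasn't on my box. It's a sketch assuming Ubuntu with UFW and fail2ban installed; adjust unit/jail names (`ssh`, `sshd`) to match your setup.

```shell
# Is the host firewall actually dropping anything?
sudo ufw status verbose

# Has fail2ban banned your IP? (jail name may differ on your install)
sudo fail2ban-client status sshd

# Check the raw packet-filter rules, in case something bypassed UFW.
sudo iptables -L -n -v

# Finally, see whether connection attempts are even reaching sshd.
sudo journalctl -u ssh --since "1 hour ago"
```

If all of these come back clean - as they did for me - the drops are happening upstream of your server, which is the point at which it becomes the provider's problem.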

Final Thoughts

MVP in 6 weeks feels doable; I am personally very grateful to everyone who's helping make that true.