IGA’s Newest Integrations: The Game Crafter & IndieGoGo

We’re pleased to announce new integration points with two of the powerhouses of the indie game publishing world: IndieGoGo and The Game Crafter!

IGA’s support of IndieGoGo means that members using IGG as their crowdfunding platform of choice can now count on the same reliable backer/status updates, campaign shares, and all the other features Kickstarter users have come to love. In the past, we did the best we could to manually emulate what our automated processes do, but our new API access and integration means IndieGoGo campaigns are first-class citizens on the Indie Game Alliance website.

Our Game Crafter implementation is small for now, but will be expanding over time. You can now import game details from TGC just as you would from BoardGameGeek, using the Studio Game Catalog found at https://www.indiegamealliance.com/account/studio/games. Simply paste in your game’s URL and click Load Data.

Origins Game Fair After-Action Report

Greetings! We just closed the book on our first Origins experience — and what an experience it was! We’ll be talking more about some of the specifics, and posting pictures, in the coming days, but we wanted to get something out in the Monday newsletter about the results of the show.

We’re presently in our hotel in Columbus waiting on the monsoon to die down before we begin the trek back. We’ll be driving back to Florida all day Monday, and most of Tuesday will be spent processing Origins financials, getting the stock re-situated in the warehouse, and catching our breath, so the warehouse and customer support will remain closed until Wednesday, June 21.  

Minions will be glad to know we’ve got some new games and expansions in the truck, which will be in the inventory when we get home. Fans of Brotherwise Games, Shoot Again Games, Breaking Games, Cohio Games, Noble Quest, and Leder Games should get their Loot Points ready!

We’d like to welcome a few new members that joined us at the show: Purple Potato Games, Flatworks Games, and Noble Quests. Noble Quests dropped off a few copies of their RPG, Mystic Forces. Flatworks and Purple Potato both have Kickstarters in progress now — Dwarven Smithy and Burst My Bubble, respectively. Links are in the newsletter and on indiegamealliance.com; check them both out! We also recruited a few new Minions at the show; welcome to you as well!

Victoria and I would like to extend an extremely warm thanks to Jason Gough, Nathan Knight and Mark Miller, who made ridiculously long drives of their own on very short notice to help us staff our Origins presence. You guys are all-stars! Thanks also to all the Minions and members who dropped by the booth to say hello – there are too many of you to list, but we enjoyed meeting (or reuniting with) every one of you!

We had a few very productive conversations with some of our existing partners, the fruits of which I hope to be able to announce soon. We also had four or five very good conversations with potential new partners, and will be following up with them after the show to put the final touches on new discounts and benefits we’ll be able to offer our members in short order. We also chatted with several convention representatives that invited IGA to exhibit at their upcoming shows, which we’re excited about as well.

We didn’t intend to run a game library at Origins, so we brought only open copies of what we had to sell, which let us show potential buyers the components. However, our 10×10 booth was pretty cramped, so we had nowhere to store those games. We also ended up with more Minion support at the show than we had anticipated on short notice, so we used the games and the extra team members to run an impromptu game library in the main gaming hall. Our team will be submitting demo reports and photos of these efforts when they get settled back in at home, probably no later than Thursday or so.

Sales were, generally speaking, quite weak at this show. We did very little business Thursday and Friday (which we heard was nearly universal) but picked up steam a bit on Saturday and Sunday. We’ve got about $1900 in payouts going out to a variety of members as soon as the credit card payments get deposited into our account from Square. If you received a real-time sales notification, we owe you money! Please take a moment and review your payment information on file — we’re missing it for about half the members who had games sell. Without this information, we cannot pay you! We’ve had a request to offer a “daily digest” mode for the sales notifications, and I’ll see if I can get that done on the site before Dice Tower.

We are pleased to report that IGA is already booked for Origins 2018 with at least another 10×10 booth, if not more. Hopefully knowing a year in advance will translate to a stronger presence, and a better bottom line for IGA.

That’s the good news. The bad news: while we haven’t done the final numbers yet, it’s looking like IGA took a beating financially by attending Origins this year. This is due in part to the short notice of the show, which left us with very few in-booth demo sales. We’re planning on posting a more detailed breakdown of this in a few days when all the numbers are done. Members can, of course, help us out by purchasing demo time in our dedicated IGA demo and sales room at Dice Tower Convention in three short weeks and Gen Con in August. We’re still looking for volunteers for both shows as well.

That’s a wrap on Origins until we get home and get everything processed. See you next year, Columbus!

On the road to Origins Game Fair!

Greetings!

By the time this newsletter is emailed, we’ll be somewhere in southern Georgia en route to Origins Game Fair. Come see us this Thursday through Sunday at booth #151! If you’d like to make an appointment to spend some time with us for an interview or other discussion, come on by the booth and we’ll set something up. Presently, Thursday is looking like our busiest day of the show.

While we’re at the show, we’ll likely have limited access to email, social media, and other means of contact. While we will do our best to clear our emails and other messages nightly in the hotel, this may not be possible, and we appreciate your patience if we cannot get back to you until we return from the show.

A reminder to Minions – the IGA warehouse is closed until we return from the show. You may place orders in the store, but they won’t ship until we return. Members — if we’ve got stock of your games to sell, keep an eye on your email; with our new store software, you’ll get an email in real time when one of your games finds a new home with an excited gamer!

Didn’t get your demos booked in time? Don’t wait — Dice Tower Convention is in three short weeks, and Gen Con is coming up too! Book now on your Studio Dashboard before we sell out!

Announcing Minion Certifications!

We’re stoked to announce an often-requested new feature for member publishers: certification programs! IGA Pro members can now link a rulebook and/or a how-to-play video to any game in their Game Catalog and create a custom certification exam for it.

The exam editor interface is very similar to our easy-to-use survey module, so it should feel immediately familiar to IGA Pro members.

You can specify whether a certification is optional or required for demos – although we strongly recommend that members not make certification mandatory except for games that frequently have tournaments with high-value prizes. Certification is, however, a simple and intuitive way for you to communicate the basics of gameplay to Minions, and for Minions to show off their expertise. Our Minions already do a fantastic job of quickly picking up and mastering game rules!

Minions can complete the courses online in their own time – there’s no need to schedule a session with a trainer. Simply watch a video – in most cases, the same how-to-play video from the Kickstarter campaign – and answer a few simple questions.

You’ll get a cool little certificate you can print out, and the publisher will get an email letting them know someone else has mastered their game. Your certifications will appear on your public Minion profile — and there might even be some new achievements associated with certifications.

Members: Link Your Kickstarter Profile to Your IGA Account

Hey there! We’ve added a quick new feature that will help us out a lot in promoting our members’ Kickstarter campaigns: you can now link your Kickstarter profile to your IGA account. You can find the field to do this on your Studio Profile page at https://www.indiegamealliance.com/account/studio/profile.php.

Your Kickstarter profile link should look like https://www.kickstarter.com/profile/your_profile_name. To find it, log into Kickstarter, click your avatar in the upper right-hand corner to open your account menu, and select Profile to visit your public profile.

Just cut and paste the link from your browser’s address bar into the field on your IGA Studio Profile, click Save, and you’re all set!

IGA will use this information to proactively start sharing and supporting your current and future Kickstarter campaigns without the need for you to let us know before launch. It’s one more thing off your plate during the busy pre-launch period, and it makes sure that IGA can get to work promoting your project on day one, every time.

Rearranging the Furniture…

As you may have noticed, we’ve reorganized the site a fair amount in the last few weeks. Most of the pages and menus look just as they did, but many of the page addresses (URLs) have changed. We’ve been doing this in stages to try to minimize any potential fallout from moving all the pages around at once. So far, the fallout has been minimal, as we’re being very careful and deliberate in what we’re doing. Of course, if you see anything that isn’t working quite right, please don’t hesitate to let us know.

Breaking the site up into modules as we have will make for more readable links and make life for our web development team much easier going forward. We’ve also been using this time to clean up and update older, legacy code, so everything should be running just a little bit faster and better. We know this isn’t terribly exciting for our members compared to new features (trust me, those are coming) but our code team is positively giddy to have this work done, and everything we do from now on will get done faster, easier and better as a result of these changes. Of course, website work will likely be on the back burner a little bit through the summer due to convention season.

Wherever possible, we’ve added redirects from the old page addresses to the new. We’ll leave these up for a month or two, until we see people stop hitting them. We strongly encourage you to update any bookmarks / favorites you may have in your browser, and check any links you may have added to your Web site, Facebook profile, or other digital platform. If you’ve printed any changed URL in any game material, contact us about leaving a permanent redirect in place so the links don’t break.

Thank you for your patience as we make this change. There’s no great time for upheaval and reorganization, but it’s almost always better once you get through it. Thank you also to those Minions who have been helping us test the changes and identifying problems.

IGA’s Store is Live!

After literally years of design and development, IGA’s game store is finally live!

We’ll be using this store to supply our Minions with games and swag for their demos, as well as to launch a new micro-distribution service to help get member games into friendly local game stores around the world. The new software also delivers an incredible amount of transparency to our process; members can now see what we have and where it’s going in pretty much real-time.

We notified the Minions of the store’s launch on April 8, and we’ve already processed more than 25 orders. Minions, if you haven’t gotten out to run some demos in a while, now would be a fantastic time to do so!

Members, we’re sure you have questions. No worries! We’ve put together an interactive guide to help you get a handle on everything having to do with our new store software, stock policies, and the tools at your disposal. We strongly recommend that all member publishers check this out, especially if we already have stock on hand from you.

Please direct any questions or concerns to our warehouse team at warehouse@indiegamealliance.com. Thank you so much to Victoria, Sherri, Jason, and all the other Minions who helped out with testing and development.

IGA is going to Origins Game Fair!

We’re thrilled to announce that after three years on the waiting list, IGA has just acquired a booth at Origins Game Fair this year! Miracles happen after all! The show’s in just seven weeks, so there’s a ton to do to prepare and not much time to book your demos and get us sales stock!

Because this is such a short-notice booking, we’re going to shorten the IGA Pro exclusivity window to two weeks for this show, after which point booking will become available to all members.

In celebration of this new opportunity, and to replenish our coffers after this surprise expense so we can book hotels and rent a truck for stock transport, IGA has not only reduced the pricing on demos for Origins, but put demos for Gen Con and Dice Tower Con on sale as well! Demo time has been reduced by $20 on off-peak days and $25 on peak days. Demos will certainly go fast at this pricing, though, so get yours now!

We’re Back! What Happened, and What We’re Doing About It

Greetings. I’d like to take a moment to talk about the complete service outage we experienced on April 4, 2017, what happened as best I understand it, and what we’re going to learn from the experience and do better in the future.

First off, the most important point: we’re back up and running, and all checks I’ve been able to do since the server came back up indicate we lost no data. I’m now in the process of catching up on the emails and other messages we didn’t receive during the outage.

On April 3, I was doing some development on the server and noticed it was running awfully slow. I should have said something to support, but I figured my code wasn’t very well optimized yet; it was late and I was tired, so I left it alone. When I went to bed on April 4 at 5:15AM, the server was slow, but functional. When I woke up at 9:00AM, it was completely down. I reached out to the support team at our hosting provider immediately.

IGA runs – currently – on one virtual private server, which is basically a private pie-slice of a massive server’s resources. Everything’s supposed to be redundant and backed up and magical. For those who aren’t up on datacenters and stuff, commercial-grade servers don’t just have one hard drive like your desktop computer does; they generally have 4 or 6 drives at minimum, and they can be configured such that multiple drives have a copy of the data, because you don’t wanna lose stuff. The idea is that, because drives fail, you can lose a drive or two and still be OK, you just have to replace that drive and carry on; the array just heals, as long as you don’t lose a bunch of drives all at once. We’re actually on a SAN, which is a giant disk array with many disks, shared by multiple servers.

SANs are managed by a special controller card that is in charge of reading and writing the data. Turns out, our SAN’s controller card was dying, and as it did, it was effectively ruining swaths of the drives. The slowdowns I was experiencing on April 3 were probably the SAN trying to find a usable copy of the data on other drives after the first went down. At 7:32AM on April 4, the datacenter team realized what was happening and pulled the plug. This took our server offline, but also protected our data.

The datacenter team then replaced the bad controller card and built a whole new SAN for us and the other sixty or so virtual servers that were using it. Then came the real challenge: putting the data back. Since the corruption meant there was no complete copy on any one set of disks on the old SAN, the datacenter team had to write a custom script to basically scour all the disks and reassemble the data from the still-usable bits. Between writing that script and the very slooooow process of pulling the data back (they were using the SAN in a way it wasn’t intended to be used, and we’re talking about terabytes of data across all the affected customers), this took a while. We first got indications that indiegamealliance.com was back online at approximately 1:18AM on April 5.

Most companies would have just turned the server back on with missing data and told people, “if you didn’t have a backup, tough luck,” but the crew at HostDime went above and beyond as they always do. I’m pleased to report that all checks I’ve been able to do since the server came back up indicate we lost exactly no data. As an aside: HostDime is a fantastic company to host with specifically for reasons like this – and IGA Pro members save 10% when they use our referral code.

While we dodged a bullet here, there was a period of time where the HostDime staff didn’t know if they could restore our data, and that left me spending the morning assembling our backups and seeing what all we had and didn’t have on recent copies. All in all, I could have restored about 96% of our data, which isn’t catastrophic, but that’s not good enough for me. I did a full audit of our backup situation, and here’s what I found.

IGA’s collected information footprint is big: about 12 gigabytes worth. Once more people start using the new print and play hosting feature, that will reach hundreds of gigabytes in very short order, so it’s impractical to just download everything every day as a backup. I was signed up for a “remote backup” service with the hosting provider, but it only has a certain amount of backup space. The problem is that, apparently, there’s a setting you have to turn on that tells the backup process, “delete old backups when you run out of room, to make room for new ones.” To my shock and dismay, that was not turned on, so the newest backup we had there was from November 30, 2016. I had received email warnings about the failed backups, but they were cryptic Linux system logs and I didn’t understand them well enough to decipher what was wrong and fix it. I am a coder, not a systems administrator, and the emails generally say things like “A system event has occurred” as opposed to “Your backups are broken! Fix them now!” So, that backup would have been good for some of the older, legacy stuff and a few other non-IGA static domains I host, but not terribly helpful for IGA as a whole. It was the primary line of defense, and it was basically useless in this case.

This vulnerability has now been fixed with the aid of HostDime staff: the pruning issue is resolved, so the backup process will never again fail for lack of disk space. We also found an error in one of our testing databases that was giving the backup process some fits, so we took care of that as a precaution as well. I’ll be downloading the generated backups on a regular basis so that we have extra copies, even after the old ones are deleted from the online storage.

Without a comprehensive backup of the site, it was time to hit application-level backups, which really should be the last line of defense.

I work on IGA’s website code out of a Google Drive share, which means I have the current code stored in Google’s cloud. Because I use the Google Drive desktop app, I also have a fully-synchronized copy of my Google Drive on every machine I use, which is two laptops, my desktop workstation and my phone. Additionally, I check in code to GitLab.com after every major revision, so that’s yet another, albeit slightly outdated at the moment, copy. So, the IGA code was fine, and we would have suffered no data loss on that front.

Most of us at IGA use GMail as our mail client, for its handy Android integration, but it has another super bonus feature – it stores a copy of all our email when we check it. So other than emails received during the outage itself, we appear to have lost nothing on that front. Hooray, clouds-talking-to-clouds!

A few weeks ago, I read about a massive database failure at a large company, made worse by a lack of backups, and I had a bit of a panic attack and started implementing a plan to improve the IGA backup structure, which was not at all as robust as I’d have liked it to be (as you’ll see). It’s a very, very good thing I did. Focusing on the most important stuff first, I wrote a script that executes every 4 hours: it takes the main IGA database, compresses it to a file, and sends it to me securely offsite. So, I had a snapshot from 4:20AM. The 8:20AM snapshot never came, so we knew the shutdown occurred between 5:15AM and 8:20AM. (It was actually 7:32.) Fortunately, not a lot of data moves around during those hours, so very little if anything would have been lost. I have accelerated the frequency of this backup to hourly as an extra precaution.
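
The post doesn’t include the script itself, but for the technically curious, a minimal sketch of that kind of dump-compress-and-ship job might look something like the following. It assumes a MySQL database, the standard mysqldump tool, and an offsite host reachable over SSH; the database name, paths, and offsite target below are placeholders rather than IGA’s real configuration, and the job would be scheduled from cron.

    #!/usr/bin/env python3
    # Hypothetical sketch only: the database name, paths, and offsite target are
    # placeholders. Schedule this from cron (e.g. hourly) to get regular snapshots.
    import datetime
    import gzip
    import subprocess

    DB_NAME = "iga_main"                              # placeholder database name
    DEST = "backups@offsite.example.com:~/iga-db/"    # placeholder offsite host/path

    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M")
    dump_path = f"/var/backups/{DB_NAME}-{stamp}.sql.gz"

    # Dump the database (credentials come from ~/.my.cnf) and compress the output.
    dump = subprocess.run(["mysqldump", DB_NAME], check=True, capture_output=True)
    with gzip.open(dump_path, "wb") as fh:
        fh.write(dump.stdout)

    # Ship the compressed snapshot offsite over SSH so a copy lives off the server.
    subprocess.run(["scp", dump_path, DEST], check=True)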

We didn’t have one of our supporting databases – the WordPress installation that runs our little news blog – included in our backups. We would have been able to recover most of the posts through other means, and losing it wouldn’t have been a huge deal, but adding that database to the existing backup regimen was trivial; it was omitted purely as an easily-corrected oversight on my part. This has now been done as well, with a frequency of twelve hours (it’s very rare that we make more than one post in a day, so this should be plenty, and we don’t want to fill up the backup drives with endless identical copies). WordPress has a built-in backup option, but it costs money to use, and WordPress is such a small portion of what we do that I’ve never activated it.

So that’s code, email and databases secured, which are the biggest things. Now, on to data files. This one’s a toughie. We store image files that are linked to games, demo report pictures, and user avatars (publisher logos, mostly), and very recently started accepting uploads for print and play files. Because I don’t use them on the development side, they aren’t included in the backups of the code, and because they aren’t stored directly in the database, they aren’t in the database backups either. We were counting on the backups from the remote FTP (the ones that hadn’t successfully run since November 30) to cover those, and we didn’t have a backup for that backup yet (more on this below).

The system administration team at HostDime had a full-system backup of the SAN from February 4 of this year, so no files our users uploaded to the site before then would have been missing. Of the remaining ones, I would have been able to write a script to re-download the missing game and company logo files from BoardGameGeek (which is where we got most of them in the first place), but demo report pictures, PNPs and such uploaded post-February 4 would have been lost. This appears to have been the most damaging potential impact of this event.

Immediately after the server came up, I took a snapshot of the entire images directory, PNP cache and other user-uploaded content to make sure we had a current copy. That, like my code, is now on Google Drive for safe-keeping. We don’t yet have an automated solution for keeping this data backed up, but I’ll do it manually every couple of days until we do.
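
For anyone curious what that kind of snapshot looks like in practice, here’s a rough sketch of one way to do it; the directory names and output path are placeholders, not our actual layout. It just bundles the user-uploaded content into a single dated archive that can then be parked on Google Drive or anywhere else safe.

    #!/usr/bin/env python3
    # Rough sketch; the upload directories and output path are placeholders.
    import datetime
    import tarfile

    UPLOAD_DIRS = ["/var/www/iga/images", "/var/www/iga/pnp"]   # placeholder paths
    stamp = datetime.date.today().isoformat()
    archive = f"/var/backups/iga-uploads-{stamp}.tar.gz"

    with tarfile.open(archive, "w:gz") as tar:
        for path in UPLOAD_DIRS:
            tar.add(path)        # recursively adds everything under the directory

    print(f"Wrote {archive}; copy it somewhere that isn't this server.")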

So, where do we go from here?

At present, we feel pretty good about the backup strategy for the code itself; as I said, it exists in real-time on four devices I control, plus in two different clouds (GitHub and Google Drive), plus the production server itself. That said, the “full server” backup strategy doesn’t differentiate between a code file and an image file, so all the improvements we make to the full server strategy will provide even more security for the code.

The database backup seems to have been pretty effective, but I don’t like how much I’ve held my breath today wondering whether it was there, current, and able to restore properly, so we’re going to do more about that as well. First off, as I mentioned, we’ve increased the frequency to hourly and added in the support databases like the WordPress install. As with the code, it’ll also get swept up in any full-server backup strategy we employ.

Before this incident, as part of my panic mode a few weeks ago, I provisioned a second server intended to serve as a live backup for the database. Every time someone alters the production database, that change is also sent to the backup server, so it stays current to within a few seconds. In database lingo, we call this replication. Problem is, I didn’t have it fully set up yet, so it wouldn’t have helped us in this event. I had it scheduled for next week.

Once the replicated server is in place, the hourly snapshots will become a tertiary backup, behind the replicated server and the full-system snapshots HostDime takes for us. The snapshots will also provide extra options for restores if something goes wrong with the replication, and help protect us from “bad query” results – stuff like accidentally deleting all the games instead of the one I meant to because I typed the wrong query. (Replication doesn’t generally help here, because the replication server will happily execute the bad query you just wrote as well if you don’t catch it fast enough.)

I’m going to look into the feasibility of having the code and image files automatically synchronized with the replication server as well. In theory, we could then promote the replication server to be the new “production” environment in the event of an outage if something like this were to happen again – the service would be much slower (because I can’t afford redundant high-powered equipment) but we’d at least be limping along until the primary came back up.

We’re going to take a two-tiered approach to the full-server backup. First off, we’ve turned on the auto-pruning feature so that the backup process we’re already paying for will start working again and continue to work even after the disk fills up. I’ll be writing a script to automatically download those snapshots, so I can archive them myself locally. Copies I can physically lay my hands on at need give me the most warm and fuzzies. This will provide yet another copy, and permit me to have older snapshots on hand even after the online backup has been purged. This will also give me more visibility in the form of big blaring alarm emails if something goes south with the backup strategy. This will get enormous, but if I have to throw the old ones away after a year or keep a bucket of USB drives around once they get super old, it’s a small price to pay.
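
That download script isn’t written yet, but a bare-bones sketch of the idea might look like this; the hostname, login, and directory names are placeholders, and it assumes the snapshots are exposed over plain FTP. It lists the remote snapshot directory and pulls down anything we don’t already have locally.

    #!/usr/bin/env python3
    # Bare-bones sketch; the host, login, and paths are placeholders.
    import os
    from ftplib import FTP

    LOCAL_DIR = "/mnt/backup-drive/snapshots"        # e.g. a local archive drive
    ftp = FTP("backups.example.com")                 # placeholder backup FTP host
    ftp.login("iga", "not-a-real-password")          # placeholder credentials
    ftp.cwd("/full-system")                          # placeholder remote directory

    for name in ftp.nlst():
        local_path = os.path.join(LOCAL_DIR, name)
        if os.path.exists(local_path):
            continue                                 # already archived locally
        with open(local_path, "wb") as fh:
            ftp.retrbinary(f"RETR {name}", fh.write)

    ftp.quit()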

I’ll also be employing an “always ask support if I get a nasty-gram from cPanel” approach, so that obtuse errors in log files I struggle to understand will no longer lead to real-world consequences.

Our hosting provider is spinning up a cloud hosting platform with much greater data redundancy (because it’s on a gigantic SAN). Having a server there, if it doesn’t get much traffic, is just a few bucks a month, and at that price, why not? I’ll be looking into using this to set up a second fully redundant server there as soon as that platform is available (HostDime says it’ll be a few weeks.)

In summary, once this all is done, our backup strategy will look like this:

  1. Fully-replicated servers that can be promoted in the event of an outage. Data-complete to within a few seconds of real-time.
    1. Already-provisioned smaller VPS (coming soon)
    2. Cloud virtual server (coming soon-ish?)
  2. Weekly/monthly HostDime-provided backups
    1. Will be FTPed to a location HostDime can access for rapid restores
    2. Will be downloaded and archived locally at IGA HQ
    3. It’s entirely possible that HostDime will also have backups of the backup server(s) to pull from in an emergency as well.
  3. Application-level backups
    1. Code:  4 local machines, Google Drive, GitHub [Frequency: real-time]
    2. Email: GMail [Frequency: within 1-2 minutes of real-time]
    3. Databases: Snapshots on Google Drive [Frequency: hourly for primary DB, twice a day for secondary DBs]
    4. Data files: None. We’re relying on one of the methods in Tiers 1-2. Have any other ideas for something cost-effective? I’m all ears. We’re considering third-party solutions like BackBlaze here.

I’m confident that these strategy improvements will be affordable and relatively simple to set up with help, and will make sure we never experience significant data loss again. And the takeaway for us all: If you don’t have at least three tested backups of something, assume you don’t have any.

We apologize profusely for the inconvenience of all of this, and give you our word that we will use this experience as an opportunity to learn and improve. We’d also like to thank the staff at HostDime, specifically Kevin, Pat, Aric and Joe, for working more than 18 hours straight to save our bacon.

Update 1: April 5, 2017 6:00PM Eastern

We’re moving fast on making these improvements. Today’s accomplishments so far include:

  • Added the WordPress database to the list of application-level backups
  • Increased the frequency of the primary database backup from every 4 hours to every hour
  • Corrected the prune operation and adjusted all settings on the primary server backups to be more efficient and effective
  • Cleaned out the old backups to make space
  • Got a bit of an education on how the backup automation process works
  • Increased the full-system backup frequency from weekly to daily
  • Arranged for a few days’ worth of full-system backups to be archived on the server itself as well as on a remote archive within HostDime
  • Made an initial copy of all 4GB of user-uploaded PDF and image files early this morning so we have at least one temporary backup
  • Ran a full-system backup test, which completed successfully

I’ll be driving up to the local Best Buy in about 20 minutes and picking up a 3TB external USB drive, to which I’ll then automate nightly downloads of the full-system snapshots for extra-extra-extra protection and for the ability to retain longer-term archives of data without needing to pay for expensive server space. Once that’s done, our passive backup strategy will be in place, and we can start working on the active (redundant server) setup. That will take a little longer than just copying files, but we’ll let you know once we’ve gotten something up and running.

Thank you to all for your patience and encouragement as we’ve gone through this important, painful step toward being a more “big-league” operation. 😉

Update 2: April 6, 2017 10:07AM

At this time, a new external hard drive has been installed at IGA HQ, and a script is now automatically pulling the nightly full-system backups down off the FTP server onto it for long-term archival and retention. This ensures we’ll be able to keep full daily snapshots for nearly a year, and then thin those down to maybe one snapshot a month once they’re more than 6 months old, or some such.
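
We haven’t settled on the exact retention rules yet, but as an illustration, the thinning step could be as simple as the sketch below; the archive path, the six-month cutoff, and the assumption of dated filenames are all placeholders. It keeps every snapshot younger than six months and, for anything older, keeps only the first snapshot of each month.

    #!/usr/bin/env python3
    # Illustrative sketch only; the archive path, cutoff, and filename format are placeholders.
    import datetime
    import os
    import re

    ARCHIVE_DIR = "/mnt/backup-drive/snapshots"       # placeholder archive location
    CUTOFF = datetime.date.today() - datetime.timedelta(days=180)
    kept_months = set()                               # (year, month) pairs already kept

    for name in sorted(os.listdir(ARCHIVE_DIR)):
        match = re.search(r"(\d{4})-(\d{2})-(\d{2})", name)   # assumes dated filenames
        if not match:
            continue
        snap_date = datetime.date(*map(int, match.groups()))
        if snap_date >= CUTOFF:
            continue                                  # under six months old: keep the daily copy
        month = (snap_date.year, snap_date.month)
        if month in kept_months:
            os.remove(os.path.join(ARCHIVE_DIR, name))   # already kept one for this month
        else:
            kept_months.add(month)                       # keep the first snapshot per month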

This is the last piece of our disaster recovery backup strategy, barring any improvements suggested by the community. We’re now 100% confident that we could come back pretty much unscathed from losing the server. The next step is active recovery, which is our real-time redundant servers. Those are going to take a little longer to set up and configure, so we’ll be wrapping up a few projects we had started before this occurred, and then focusing our development efforts on that.