Category Archives: Website

New: Bonus Loot Points for Photos in Demo Reports

Over the last few months, we’ve heard a lot of feedback from our members about our demo reports and ways that they could be more useful. Almost universally, we hear that photos attached to demo reports make them much more valuable. Today, we’ve made a simple but critical change that we hope encourages more Minion photography work.

Effective immediately, demo reports that include an uploaded image will earn 1 extra Loot Point. Like all other Loot Points, it will be doubled for games on Boost and tripled for playtest games.

We’ll also select a Photo of the Month to feature on social media and in our newsletter. The Minion who takes that photo will receive 20 extra Loot Points and a special achievement!

The other major comment we hear about demo reports is that designers and publishers would like more player-level feedback. We’d like to remind all of our members that some time ago, we developed a custom survey editor that’s available to all Pro members. You can find it on the Studio Dashboard. You can create a survey for each game and expansion, and adding questions is as simple as drag-and-drop. Players fill out the surveys on their mobile devices, and they generally take no more than a minute. Completed surveys are available for publishers to view on our website, and are also emailed to you in real time.

Minions are reminded to encourage players to complete surveys at every demo. Games that don’t have a custom survey attached use a simple default survey, so you never have to worry about whether a survey exists. If you want quick access to a survey, you can generate QR codes that embed your user ID (and even the game ID, if you want to print one for each of your games) from both the Minion and Studio dashboards. Once a month, we randomly draw one survey, and the Minion who collected it receives 20 bonus Loot Points!
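
If you’re curious what generating one of these codes involves, here’s a minimal sketch using Python’s qrcode library; the survey URL structure and the minion/game parameter names are illustrative assumptions, not our actual endpoint.

```python
# Minimal sketch: build a survey QR code that embeds a Minion's user ID and,
# optionally, a game ID. The URL structure and parameter names below are
# illustrative assumptions, not IGA's actual survey endpoint.
from typing import Optional

import qrcode  # third-party library: pip install "qrcode[pil]"


def make_survey_qr(user_id: int, game_id: Optional[int] = None,
                   out_file: str = "survey_qr.png") -> None:
    url = f"https://www.indiegamealliance.com/survey?minion={user_id}"
    if game_id is not None:
        url += f"&game={game_id}"
    qrcode.make(url).save(out_file)  # qrcode.make returns a PIL-backed image


# Example: one code per game, ready to print on a table tent.
make_survey_qr(user_id=12345, game_id=678, out_file="survey_game_678.png")
```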

Introducing IGA Checkups

Hey there, Team IGA! I just finished implementing a feature I’ve long wanted to add to our arsenal, and I’m hoping it will really help you take advantage of IGA’s myriad benefits. I can’t tell you how many times members tell me, “I didn’t know IGA offered that!” We’re hoping our new Checkups feature will change that.

Once a month, you’ll receive an email that goes through all of your profile settings, games and inventory stock, and makes intelligent suggestions based on patterns we detect for demo reports, sales, and so forth. It’ll identify areas where we can work together to improve things, IGA services you could be benefiting from, and incomplete data on your studio profile or game catalog. It’ll help you stay on top of bringing new games into our system and supporting old ones, too.

Want to check your progress? You can also view your checkup in real-time at https://www.indiegamealliance.com/account/studio/checkup. Don’t want to receive the emails? You can opt out using our new-and-improved email alert settings page at https://www.indiegamealliance.com/account/studio/alerts.

These emails will go out on the 3rd of every month, but since we’re already past that date, the December ones are going out as I type this.

Any questions? Just reply to the checkup email; it’ll go right to your designated account rep. We’re really hoping that this system will help bring you timely information about the IGA services that can be of the most help for whatever you’re working on right now.

As always, we welcome member feedback about this new service in our IGA Developers’ Lounge on Facebook.

Newsletters Moving to Wednesdays

Just a quick word here – we’re moving the newsletters to Wednesday mornings instead of Mondays. Sorry for sending two this week; that won’t happen going forward.

Our reasoning has to do with supporting member Kickstarters. Statistically, most Kickstarter campaigns for tabletop games launch on a Tuesday; indeed, six member campaigns launched yesterday. A campaign that launches on a Tuesday and runs for 30 days (the default Kickstarter campaign length) will end on a Thursday, since 30 days is four weeks plus two days. Moving the newsletter to Wednesday therefore lets us share our members’ Kickstarters with our audience during both the campaign’s first 48 hours and its last 48 hours, both of which are critical windows. As it stands now, with a Monday newsletter and a Tuesday launch, we don’t get the word out until day 6 of the campaign, when the mid-campaign lull is already beginning.

Again, we apologize for the double newsletter this week, but we’re confident that this change will be worth the minor inconvenience. Thank you all for your patience, and good luck to all of our members whose campaigns are presently live!

IGA Policy Update: Store Refresh Periods

Greetings, members and Minions. We wanted to let you know about a change we’ve made to the way our IGA Reward Store gets new items, based on feedback we’ve had from Minions.

As member publishers sometimes give us very limited quantities of games, it can be something of a race for Minions to hit the store with a pile of Loot Points when a new game is made available. This limits options for Minions who aren’t online every day, and also leads to buyer’s remorse when a Minion spends all their Loot Points on a game, only to see a game they wanted more appear in the store the next day and sell out immediately.

To help alleviate this, we’re moving to a store refresh schedule, similar to what BoardGameGeek does with its promo store. We’ll add new items to the store all at once on the 5th of every month, so Minions can plan to browse the store then, see everything that came in over the previous month, and make their selections.

Members will still be able to see the items in their My Inventory Items dashboard, so inventory control and transparency are not compromised in any way as a result of this change.

Please direct any questions to Victoria Hardman, who runs our store, at warehouse@indiegamealliance.com. Thank you.

Push Your Game to the Forefront with New IGA Boosts

Introducing a new feature to help your games get to the front of the pack: IGA Boosts! Boosts are a great way to promote a game even harder during peak periods, such as a Kickstarter launch, a new expansion release, or a PR blitz following an award win.

For a nominal fee, you can double the Loot Points IGA Minions earn when they demo your game, for a 30-day period beginning on a date you choose. Minions will be notified of current and upcoming Boosts in the IGA weekly newsletter, and can also view them on our website.

For more details about the process, or to schedule a Boost, click the Schedule a Boost button at the top of any game profile in your Studio Game Catalog. (Note: you may only Boost a standalone game, not an expansion; to promote an expansion, Boost the base game instead.)

Minions, you don’t have to do anything extra to claim your Boosted Loot Points – just file a demo report for the Boosted game during the Boost period.

IGA’s Newest Integrations: The Game Crafter & IndieGoGo

We’re pleased to announce new integration points with two of the powerhouses of the indie game publishing world: IndieGoGo and The Game Crafter!

IGA’s support of IndieGoGo means that members using IGG as their crowdfunding platform of choice can now count on the same reliable backer/status updates, campaign shares, and all the other features Kickstarter users have come to love. In the past, we did the best we could to manually emulate what our automated processes do, but our new API access and integration means IndieGoGo campaigns are first-class citizens on the Indie Game Alliance website.

Our Game Crafter integration is small for now, but will expand over time. You can now import game details from TGC just as you would from BoardGameGeek, using the Studio Game Catalog found at https://www.indiegamealliance.com/account/studio/games. Simply paste in your game’s URL and click Load Data.

Announcing Minion Certifications!

We’re stoked to announce an often-requested new feature for member publishers: certification programs! IGA Pro members can now link a rulebook and/or a how-to-play video to any game in their Game Catalog and create a custom certification exam for it.

The exam editor interface is very similar to our easy-to-use survey module, so it should feel immediately familiar to IGA Pro members.

You can specify whether a certification is optional or required for demos, although we strongly recommend that members not make certification mandatory except for games that frequently run tournaments with high-value prizes. This is, however, a simple and intuitive way for you to communicate the basics of gameplay to Minions, and for Minions to show off their expertise. Our Minions already do a fantastic job of quickly picking up and mastering game rules!

Minions can complete the courses online in their own time – there’s no need to schedule a session with a trainer. Simply watch a video – in most cases, the same how-to-play video from the Kickstarter campaign – and answer a few simple questions.

You’ll get a cool little certificate you can print out, and the publisher will get an email letting them know another Minion has mastered their game. Your certifications will appear on your public Minion profile, and there might even be some new achievements associated with certifications.

Rearranging the Furniture…

As you may have noticed, we’ve reorganized the site a fair amount over the last few weeks. Most of the pages and menus look just as they did, but many of the page addresses (URLs) have changed. We’ve been doing this in stages to try to minimize any potential fallout from moving all the pages around at once. So far, we’ve seen very few problems, as we’re being very careful and deliberate in what we’re doing. Of course, if you see anything that isn’t working quite right, please don’t hesitate to let us know.

Breaking the site up into modules as we have will make for more readable links and make life for our web development team much easier going forward. We’ve also been using this time to clean up and update older, legacy code, so everything should be running just a little bit faster and better. We know this isn’t terribly exciting for our members compared to new features (trust me, those are coming) but our code team is positively giddy to have this work done, and everything we do from now on will get done faster, easier and better as a result of these changes. Of course, website work will likely be on the back burner a little bit through the summer due to convention season.

Wherever possible, we’ve added redirects from the old page addresses to the new ones. We’ll leave these up for a month or two, until we see people stop hitting them. We strongly encourage you to update any bookmarks or favorites you may have in your browser, and to check any links you may have added to your website, Facebook profile, or other digital platform. If you’ve printed a changed URL in any game material, contact us about leaving a permanent redirect in place so the links don’t break.

Thank you for your patience as we make this change. There’s no great time for upheaval and reorganization, but it’s almost always better once you get through it. Thank you also to the Minions who have been helping us test the changes and identify problems.

IGA’s Store is Live!

After literally years of design and development, IGA’s game store is finally live!

We’ll be using this store to supply our Minions with games and swag for their demos, as well as to launch a new micro-distribution service to help get member games into friendly local game stores around the world. The new software also delivers an incredible amount of transparency to our process; members can now see what we have and where it’s going in pretty much real-time.

We notified the Minions of the store’s launch on April 8, and we’ve already processed more than 25 orders. Minions, if you haven’t gotten out to run some demos in a while, now would be a fantastic time to do so!

Members, we’re sure you have questions. No worries! We’ve put together an interactive guide to help you get a handle on everything having to do with our new store software, stock policies, and the tools at your disposal. We strongly recommend that all member publishers check this out, especially if we already have stock on hand from you.

Please direct any questions or concerns to our warehouse team at warehouse@indiegamealliance.com. Thank you so much to Victoria, Sherri, Jason, and all the other Minions who helped out with testing and development.

We’re Back! What Happened, and What We’re Doing About It

Greetings. I’d like to take a moment to talk about the complete service outage we experienced on April 4, 2017, what happened as best I understand it, and what we’re going to learn from the experience and do better in the future.

First off, the most important point: we’re back up and running, and all checks I’ve been able to do since the server came back up indicate we lost no data. I’m now catching up on the emails and other messages we missed during the outage.

On April 3, I was doing some development on the server and noticed it was running awfully slow. I should have said something to support, but I figured my code wasn’t very well optimized yet, it was late and I was tired, and I left it alone. When I went to bed on April 4 at 5:15AM, the server was slow, but functional. When I woke up at 9:00AM, it was completely down. I reached out to the support team at our hosting provider immediately.

IGA runs – currently – on one virtual private server, which is basically a private pie-slice of a massive server’s resources. Everything’s supposed to be redundant and backed up and magical. For those who aren’t up on datacenters and stuff, commercial-grade servers don’t just have one hard drive like your desktop computer does; they generally have 4 or 6 drives at minimum, and they can be configured such that multiple drives have a copy of the data, because you don’t wanna lose stuff. The idea is that, because drives fail, you can lose a drive or two and still be OK, you just have to replace that drive and carry on; the array just heals, as long as you don’t lose a bunch of drives all at once. We’re actually on a SAN, which is a giant disk array with many disks, shared by multiple servers.

SANs are managed by a special controller card that is in charge of reading and writing the data. Turns out, our SAN’s controller card was dying, and as it did, it was effectively ruining swaths of the drives. The slowdowns I was experiencing on April 3 were probably the SAN trying to find a usable copy of the data on other drives after the first went down. At 7:32AM on April 4, the datacenter team realized what was happening and pulled the plug. This took our server offline, but also protected our data.

The datacenter team then replaced the bad controller card and built a whole new SAN for us and the other sixty or so virtual servers that were using it. Now comes the real challenge: putting the data back. Since there was no complete copy on any one set of disks on the old SAN, because of the corruption, the datacenter team had to write a custom script to basically scour all the disks and reassemble the data from the still-usable bits. Between writing the script and then the very slooooow process of pulling the data back given that they were using the SAN in a way it wasn’t intended to be used, and because we’re talking about terabytes of data across all the affected customers, this took a while. We first got indications that indiegamealliance.com was back online at approximately 1:18AM on April 5.

Most companies would have just turned the server back on with missing data and told people, “if you didn’t have a backup, tough luck,” but the crew at HostDime went above and beyond, as they always do. I’m pleased to report that all checks I’ve been able to do since the server came back up have indicated we lost exactly no data. As an aside: HostDime is a fantastic company to host with, specifically for reasons like this – and IGA Pro members save 10% when they use our referral code.

While we dodged a bullet here, there was a period of time where the HostDime staff didn’t know if they could restore our data, and that left me spending the morning assembling our backups and seeing what all we had and didn’t have on recent copies. All in all, I could have restored about 96% of our data, which isn’t catastrophic, but that’s not good enough for me. I did a full audit of our backup situation, and here’s what I found.

IGA’s collected information footprint is big: about 12 gigabytes worth. Once more people start using the new print and play hosting feature, that will reach hundreds of gigabytes in very short order. It’s impractical to just download it every day to have a backup. I was signed up for a “remote backup” service with the hosting provider, but it only has a certain amount of backup space. The problem is that, apparently, there’s a setting you have to turn on that tells the backup process, “delete old backups when you run out of room, to make room for new ones.” To my shock and dismay, that was not turned on, so the newest backup we had there was from November 30, 2016. I had received email warnings about the failed backups, but they were cryptic Linux system logs and I didn’t understand them well enough to decipher what was wrong and fix them. I am a coder, not a systems administrator, and the emails generally say something like “A system event has occurred” as opposed to “Your backups are broken! Fix them now!” So, that backup would have been good for some of the older, legacy stuff, and a few other non-IGA static domains I host, but not terribly helpful for IGA as a whole. This was the primary line of defense, and it was basically useless in this case.

This vulnerability has now been fixed with the aid of HostDime staff: the pruning issue is resolved, so the backups will never again fail for lack of disk space. We also found an error in one of our testing databases that was giving the backup process fits, so we took care of that as a precaution as well. I’ll be downloading the generated backups on a regular basis so that we have extra copies, even after the old ones are deleted from the online storage.

Without a comprehensive backup of the site, it was time to hit application-level backups, which really should be the last line of defense.

I work on IGA’s website code out of a Google Drive share, which means I have the current code stored in Google’s cloud. Because I use the Google Drive desktop app, I also have a fully-synchronized copy of my Google Drive on every machine I use, which is two laptops, my desktop workstation and my phone. Additionally, I check in code to GitLab.com after every major revision, so that’s yet another, albeit slightly outdated at the moment, copy. So, the IGA code was fine, and we would have suffered no data loss on that front.

Most IGA staff members use GMail as their mail client, for its handy Android integration, but it has another super bonus feature – it stores a copy of all our email when we check it. So other than emails received during the outage itself, we appear to have lost nothing on that front. Hooray, clouds-talking-to-clouds!

A few weeks ago, I read about a massive database failure at a large company, made worse by a lack of backups, and I had a bit of a panic attack and started implementing a plan to improve the IGA backup structure, which was not nearly as robust as I’d have liked it to be (as you’ll see). It’s a very, very good thing I did. Focusing on the most important stuff first, I wrote a script that runs every 4 hours: it takes the main IGA database, compresses it to a file, and sends it to me securely offsite. So, I had a snapshot from 4:20AM. The 8:20AM snapshot never came, so we knew the shutdown occurred between 5:15AM and then. (It was actually 7:32.) Fortunately, not a lot of data moves around during those hours, so very little if anything would have been lost. I have since increased the frequency of this backup to hourly as an extra precaution.
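
For the technically curious, here’s a minimal sketch of what a job like that can look like, assuming a MySQL database and an SFTP destination; the hostnames, credentials, paths and database name are placeholders rather than our actual setup.

```python
# Minimal sketch of an "every few hours" database backup job, assuming a
# MySQL database and an SFTP destination. The hostnames, credentials, paths
# and database name are illustrative placeholders, not IGA's actual setup.
import gzip
import shutil
import subprocess
from datetime import datetime

import paramiko  # third-party SSH/SFTP library


def backup_database() -> None:
    stamp = datetime.now().strftime("%Y%m%d-%H%M")
    dump_file = f"/tmp/iga-{stamp}.sql"
    gz_file = dump_file + ".gz"

    # 1. Dump the database to a plain SQL file.
    with open(dump_file, "wb") as out:
        subprocess.run(["mysqldump", "--single-transaction", "iga_main"],
                       stdout=out, check=True)

    # 2. Compress the dump.
    with open(dump_file, "rb") as src, gzip.open(gz_file, "wb") as dst:
        shutil.copyfileobj(src, dst)

    # 3. Ship the compressed copy offsite over SFTP.
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("backups.example.com", username="iga-backup",
                   key_filename="/home/iga/.ssh/id_rsa")
    sftp = client.open_sftp()
    sftp.put(gz_file, f"/backups/iga-{stamp}.sql.gz")
    sftp.close()
    client.close()


if __name__ == "__main__":
    backup_database()
```

A cron entry (or systemd timer) then fires it on whatever schedule we want, hourly in our case now.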

We didn’t have one of our supporting databases – the WordPress installation that runs our little news blog – included in that backup. We would have been able to recover most of the posts through other means, and losing it wouldn’t have been a huge deal, but adding that database to the existing backup regimen was trivial; it had been omitted purely as an easily-corrected oversight on my part. This has now been done as well, with a frequency of twelve hours (it’s very rare we make more than one post in a day, so this should be plenty, and we don’t want to fill up the backup drives with endless identical copies). WordPress has a built-in backup option, but it costs money to use, and WordPress is such a small portion of what we do that I’ve never activated it.

So that’s code, email and databases secured, which are the biggest things. Now, on to data files. This one’s a toughie. We store image files that are linked to games, demo report pictures, and user avatars (publisher logos, mostly), and very recently started accepting uploads for print and play files. Because I don’t use them on the development side, they aren’t included in the backups of the code, and because they aren’t stored directly in the database, they aren’t in the database backups either. We were counting on the backups on the remote FTP (the ones that hadn’t successfully run since November 30) to cover those, and we didn’t have a backup for that backup yet (more on this below).

The system administration team at HostDime had a full-system backup of the SAN from February 4 of this year, so no files our users uploaded to the site before then would have been missing. Of the remaining ones, I would have been able to write a script to re-download the missing game and company logo files from BoardGameGeek (which is where we got most of them in the first place), but demo report pictures, PNPs and such uploaded post-February 4 would have been lost. This appears to have been the most damaging potential impact of this event.

Immediately after the server came up, I took a snapshot of the entire images directory, PNP cache and other user-uploaded content to make sure we had a current copy. That, like my code, is now on Google Drive for safe-keeping. We don’t yet have an automated solution for keeping this data backed up, but I’ll do it manually every couple of days until we do.

So, where do we go from here?

At present, we feel pretty good about the backup strategy for the code itself; as I said, it exists in real-time on four devices I control, plus in two different clouds (GitHub and Google Drive), plus the production server itself. That said, the “full server” backup strategy doesn’t differentiate between a code file and an image file, so all the improvements we make to the full server strategy will provide even more security for the code.

The database backup seems to have been pretty effective, but I don’t like how much I’ve held my breath today wondering if it was there, current, and would restore properly, so we’re going to do more on that front as well. First off, as I mentioned, we’ve increased the frequency to hourly and added in the support databases like the WordPress stuff. As with the code, it’ll also get swept up in any full-server backup strategy we employ.

Before this incident, as part of my panic mode a few weeks ago, I provisioned a second server intended to serve as a live backup for the database. Every time someone alters the production database, that change is also sent to the backup server, so it stays current to within a few seconds. In database lingo, we call this replication. Problem is, I didn’t have it fully set up yet, so it wouldn’t have helped us in this event. I had it scheduled for next week.

Once the replicated server is in place, the hourly snapshots will become a tertiary backup, behind the replicated server and the full-system snapshots HostDime takes for us as the first and second lines of defense. The snapshots will also provide extra options for restore if something goes wrong with the replication, and help protect us from “bad query” results – stuff like accidentally deleting all the games instead of the one I meant to because I typed the wrong query. (Replication doesn’t generally help here, because the replication server will happily execute the bad query you just wrote as well, if you don’t catch it fast enough.)

I’m going to look into the feasibility of having the code and image files automatically synchronized with the replication server as well. In theory, we could then promote the replication server to be the new “production” environment in the event of an outage if something like this were to happen again – the service would be much slower (because I can’t afford redundant high-powered equipment) but we’d at least be limping along until the primary came back up.

We’re going to take a two-tiered approach to the full-server backup. First off, we’ve turned on the auto-pruning feature so that the backup process we’re already paying for will start working again and keep working even after the disk fills up. I’ll be writing a script to automatically download those snapshots so I can archive them locally; copies I can physically lay my hands on when needed give me the most warm fuzzies. This will provide yet another copy, and let me keep older snapshots on hand even after the online backup has been purged. It will also give me more visibility, in the form of big blaring alarm emails, if something goes south with the backup strategy. The archive will get enormous, but if I have to throw the old ones away after a year, or keep a bucket of USB drives around once they get super old, that’s a small price to pay.
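
Here’s a minimal sketch of what that downloader might look like, assuming the snapshots sit on a plain FTP server we can reach; the host, login and paths below are placeholders.

```python
# Minimal sketch: mirror any full-system snapshots we don't already have from
# the remote FTP server into a local archive directory. The host, login and
# paths are illustrative placeholders.
import os
from ftplib import FTP

FTP_HOST = "backups.example.com"
FTP_USER = "iga-archive"
FTP_PASS = "change-me"
REMOTE_DIR = "/full-system-backups"
LOCAL_DIR = "/mnt/archive/full-system-backups"


def mirror_snapshots() -> None:
    os.makedirs(LOCAL_DIR, exist_ok=True)
    ftp = FTP(FTP_HOST)
    ftp.login(FTP_USER, FTP_PASS)
    ftp.cwd(REMOTE_DIR)
    for name in ftp.nlst():
        local_path = os.path.join(LOCAL_DIR, name)
        if os.path.exists(local_path):
            continue  # already archived locally
        with open(local_path, "wb") as fh:
            ftp.retrbinary(f"RETR {name}", fh.write)
    ftp.quit()


if __name__ == "__main__":
    mirror_snapshots()
```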

I’ll also be employing an “always ask support if I get a nasty-gram from cPanel” approach, so that obtuse errors in log files I struggle to understand will no longer lead to real-world consequences.

Our hosting provider is spinning up a cloud hosting platform with much greater data redundancy (because it’s on a gigantic SAN). Having a server there, if it doesn’t get much traffic, is just a few bucks a month, and at that price, why not? I’ll be looking into using this to set up a second fully redundant server there as soon as that platform is available (HostDime says it’ll be a few weeks.)

In summary, once this all is done, our backup strategy will look like this:

  1. Fully-replicated servers that can be promoted in the event of an outage. Data-complete to within a few seconds of real-time.
    1. Already-provisioned smaller VPS (coming soon)
    2. Cloud virtual server (coming soon-ish?)
  2. Weekly/monthly HostDime-provided backups
    1. Will be FTPed to a location HostDime can access for rapid restores
    2. Will be downloaded and archived locally at IGA HQ
    3. It’s entirely possible that HostDime will also have backups of the backup server(s) to pull from in an emergency as well.
  3. Application-level backups
    1. Code:  4 local machines, Google Drive, GitHub [Frequency: real-time]
    2. Email: GMail [Frequency: within 1-2 minutes of real-time]
    3. Databases: Snapshots on Google Drive [Frequency: hourly for primary DB, twice a day for secondary DBs]
    4. Data files: None. We’re relying on one of the methods in Tiers 1-2. Have any other ideas for something cost-effective? I’m all ears. We’re considering third-party solutions like BackBlaze here.

I’m confident that these strategy improvements will be affordable and relatively simple to set up with help, and will make sure we never experience significant data loss again. And the takeaway for us all: If you don’t have at least three tested backups of something, assume you don’t have any.

We apologize profusely for the inconvenience of all of this, and give you our word that we will use this experience as an opportunity to learn and improve. We’d also like to thank the staff at HostDime, specifically Kevin, Pat, Aric and Joe, for working more than 18 hours straight to save our bacon.

Update 1: April 5, 2017 6:00PM Eastern

We’re moving fast on making these improvements. Today’s accomplishments so far include:

  • Added the WordPress database to the list of application-level backups
  • Increased the frequency of the primary database backup from every 4 hours to every hour
  • Corrected the prune operation and adjusted all settings on the primary server backups to be more efficient and effective
  • Cleaned out the old backups to make space
  • Got a bit of an education on how the backup automation process works
  • Increased full-system backup frequency from weekly to daily
  • Arranged for a few days’ worth of full-system backups to be archived on the server itself as well as on a remote archive within HostDime
  • Made an initial copy of all 4GB worth of user-uploaded PDF and image files early this morning, so we have at least one temporary backup
  • Conducted a successful full system backup test

I’ll be driving up to the local Best Buy in about 20 minutes to pick up a 3TB external USB drive; I’ll then automate nightly downloads of the full-system snapshots to it, for extra-extra-extra protection and the ability to retain longer-term archives without paying for expensive server space. Once that’s done, our passive backup strategy will be in place, and we can start working on the active (redundant server) setup. That will take a little longer than just copying files, but we’ll let you know once we’ve got something up and running.

Thank you to all for your patience and encouragement as we’ve gone through this important, painful step toward being a more “big-league” operation. 😉

Update 2: April 6, 2017 10:07AM

At this time, a new external hard drive has been installed at IGA HQ, and a script running on it is now automatically pulling the nightly full system backups down off the FTP server for long-term archival and retention. This ensures we’ll be able to keep full daily snapshots for nearly a year, and then thin those down to perhaps one snapshot a month once they’re more than 6 months old.
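
As a rough illustration of one reading of that retention policy (keep every daily snapshot for six months, then thin older ones to one per month), something like the sketch below could run after each nightly download; the directory and filename pattern are assumptions.

```python
# Rough sketch of one reading of the retention policy above: keep every daily
# snapshot for 6 months, then keep only the first snapshot of each month.
# The directory and "iga-YYYYMMDD.tar.gz" filename pattern are assumptions.
import os
import re
from datetime import datetime, timedelta

ARCHIVE_DIR = "/mnt/archive/full-system-backups"
PATTERN = re.compile(r"iga-(\d{8})\.tar\.gz$")


def thin_old_snapshots() -> None:
    cutoff = datetime.now() - timedelta(days=183)  # roughly 6 months
    kept_months = set()
    for name in sorted(os.listdir(ARCHIVE_DIR)):  # sorted == chronological
        match = PATTERN.match(name)
        if not match:
            continue
        taken = datetime.strptime(match.group(1), "%Y%m%d")
        if taken >= cutoff:
            continue  # recent snapshots are always kept
        month = (taken.year, taken.month)
        if month in kept_months:
            os.remove(os.path.join(ARCHIVE_DIR, name))  # extra copy that month
        else:
            kept_months.add(month)  # keep the earliest snapshot of the month


if __name__ == "__main__":
    thin_old_snapshots()
```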

This is the last piece of our disaster recovery backup strategy, barring any improvements suggested by the community. We’re now 100% confident that we could come back pretty much unscathed from losing the server. The next step is active recovery, which is our real-time redundant servers. Those are going to take a little longer to set up and configure, so we’ll be wrapping up a few projects we had started before this occurred, and then focusing our development efforts on that.