It feels really strange to admit this, but for years now, music has not played a very significant role in my life. These posts are now all hidden, but if you’re a longtime reader of this blog, you may remember the many times that I’ve written about all of the ways that music induces emotions from me – elation at the news of new album releases, being in awe of amazing lyrics, grief when bands broke up or went on a hiatus, being bummed out that I couldn’t make it to concerts or that artists just didn’t come to Louisiana – and this is just a small part of how music has affected me in the past.
I still listen to music on occasion, but not in the same way. Sometimes I get a song stuck in my head, so I listen to it a few times, and that’s that. Most reliably, I’ll listen to music while I’m writing, but it’s a very specific album that I’ve probably listened to hundreds of times that helps me concentrate (R/D’s “Liquid Heart Keeper”).
Really, the biggest impact music still has on my life is that every once in a while when I’m feeling nostalgic about something, I dig up an old song and relive that moment that ties me to the song. That was really the inspiration for me writing this post out – “Eden” by The Mayfield Four randomly popped into my head, and I instantly had flashbacks of hanging out in that weird little atrium in the geology building at LSU. This, in turn, made me remember trying to read “Neuromancer” for the first time in that same room, and also, perhaps more importantly, brought back fond memories of writing garbage romantic flash fiction in the hall outside of one of my geology classrooms while waiting for the current class to leave so I could go in.
Another really strange feeling I’ve experienced before from music is a bizarre sense of nostalgia while listening to songs about things I’ve literally never lived out. I suppose you could say those songs were powerful enough to transport me somewhere else and give me that brief sensation of living vicariously.
But nowadays, I just can’t seem to get into any new music. It just feels like that part of me is gone, replaced by podcast after podcast after podcast. And maybe that’s a good thing too; I certainly enjoy my podcasts, but sometimes I wonder if I’ll ever be able to feel the same way about music again.
In my last post, I briefly mentioned that I’ve been playing Fortnite a lot, which if you know me, you may find a little weird considering I normally don’t play competitive shooters. One player shooters, yes. Multiplayer online games, yes. A combination of the two…not so much. In that same vein, I wanted to explain why Fortnite is so great and why you may want to consider playing it if you’re not already.
Image courtesy of Epic Games
First, what is Fortnite? Well, to explain that, here’s a quick history lesson.
Just over a year ago, Playerunknown’s Battlegrounds (PUBG) was released to an extremely positive reception. PUBG certainly wasn’t the first game of its kind, but it undeniably popularized the battle royale style of game. That is, 100 players on a map, one team wins (or if you’re playing solo mode, then one person wins). What’s more, PUBG popularized how these types of games work, which essentially goes like this: you start off in a vehicle in the sky and parachute down to the map (which is an island in basically all of the battle royale games I’ve seen). Once you land, you have to quickly pick up a weapon and other loot to survive and take down other players. There are more mechanics depending on the game, but that’s the gist of it.
When PUBG came out, Epic was busy developing what is currently known as Fortnite: Save the World, which is a player-versus-enemy co-op game where you fight monsters with weapons and the aid of materials you can use to build forts (you can craft individual walls, stairs, ceilings, floors, etc). Epic saw the attention PUBG was getting and realized they had all of the assets available to essentially clone PUBG, and they did exactly that. Two months later they released Fortnite Battle Royale, which from this point on, I’ll just refer to as Fortnite since, funny enough, even though Save the World was the original incarnation of the game, Battle Royale is what anyone who mentions Fortnite is actually talking about.
Image courtesy of Venture Beat
As it stands, Fortnite is currently one of the most popular games in the world. Last week on Twitch, I saw 700,000 people watching Fortnite streams. Next most popular was League of Legends at 100,000, then PUBG in third at somewhere over 70,000. So if Fortnite is essentially a clone of PUBG, how did it stomp the game it copied into the ground?
First and foremost, Fortnite is free. Save the World isn’t currently free, but when it exits its early access period, it will be free. PUBG is not free, and I feel like that alone bears significant weight, especially among younger audiences with less disposable income. So yeah, that’s reason number one, and it’s a big one.
I’ve only played PUBG once, and it was on mobile, but I still feel weirdly qualified to talk about it because I’ve watched Polygon’s video team stream that game every week for 1-2 hours for just over a year. The one time I did play it, there was no learning curve since I already knew practically everything about the game. Since Fortnite is nearly identical to PUBG, I already knew how to play Fortnite except for the crafting stuff, which was easy enough to learn the basics of. Also, I pay a decent amount of attention to video game news in general, so the world of PUBG news isn’t exactly foreign.
Beyond price, Epic makes a lot of interesting and thoughtful changes to Fortnite. When they add something to the game that players don’t like (for example, an overpowered gun), they actually monitor this feedback and make adjustments accordingly. Epic employees regularly post in /r/FortniteBR confirming bugs, providing comments on community feedback, and a host of other things. I’m not sure how Bluehole (PUBG’s developers) handles that kind of stuff, but based on feedback I’ve heard, I’m guessing not super well.
Fortnite also runs really well on the platforms I’ve played it on (PC and iOS), and it’s available on almost every major platform and console, with Android support coming soon and also being the final missing piece. Now, I can’t play on my 12″ MacBook or my Surface Pro 3, but it doesn’t take a beefy machine to run Fortnite with playable graphics. The mobile client is also surprisingly good considering how hard it is to play that type of game on a touchscreen. PUBG’s iOS client is actually pretty good too, but the details that Epic put into the Fortnite mobile client to make it playable versus being on a computer or a console are really thoughtful. There’s an auto-shoot option and a visual alert that notifies you when there is shooting, a chest, or footprints nearby – all aids to things that are made more difficult on a phone or tablet.
Another thing Fortnite does really well is monetization. Yes, you can play it 100% free and experience absolutely no disadvantage in gameplay compared to someone that’s spent $1,000 on cosmetic items. Fortnite allows you to purchase V Bucks, and V Bucks are used to buy cosmetic items, emotes (usually various dances), and of course, the Battle Pass.
Season 5 battle pass, image courtesy of Forbes
Epic really knocked it out of the park with the Battle Pass, because unlike PUBG’s monetization system (loot boxes, which are essentially gambling), you always know exactly what you get with the Battle Pass, and you get it by playing the game and completing quests. Nothing is random, period. If you’re a better player, you get more experience, which means you level up faster, which means you reach higher tiers of the Battle Pass and receive the items associated with it.
Purchasing a Battle Pass costs 950 V Bucks per season, which is equivalent to $10 with 50 V Bucks leftover, as you can buy 1,000 V Bucks for $10. Each Battle Pass lasts for one season, and one season lasts for 10 weeks. But wait, this doesn’t necessarily mean you need to pay $10 every 10 weeks. If you just play the game, some tiers in the Battle Pass reward 100 V Bucks, so essentially, if you earn and save 950 V Bucks worth of your rewards every season, you only need to spend $10 one time, then play the game, and you’ve got a perpetual Battle Pass for as long as you keep earning enough rewards. And yes, that is totally doable, but you might be enticed to buy emotes or costumes.
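The math above is simple enough to sketch out. Here’s a quick back-of-the-envelope check of the “perpetual Battle Pass” idea (the prices and reward amounts come from this post, not from any official source):

```python
# Battle Pass economics as described above (illustrative figures only).
PACK_VBUCKS, PACK_PRICE_USD = 1000, 10   # smallest purchase: 1,000 V Bucks for $10
PASS_COST = 950                          # V Bucks per 10-week season

# After the first purchase you have 50 V Bucks to spare.
leftover = PACK_VBUCKS - PASS_COST

def pass_is_self_sustaining(vbucks_earned_per_season: int) -> bool:
    """If reward tiers pay out at least the pass's cost each season,
    one $10 purchase funds every season after it."""
    return vbucks_earned_per_season >= PASS_COST

print(leftover)                       # 50
print(pass_is_self_sustaining(1000))  # True
print(pass_is_self_sustaining(900))   # False
```

So as long as your seasonal rewards total at least 950 V Bucks (and you resist spending them on emotes), the initial $10 is the only money you ever put in.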
Oh, and the most important thing about the Battle Pass? It actually makes the game, which is already fun to play for free, more fun by providing additional Battle Pass-only quests and secrets (I only just learned about secret Battle Stars yesterday!).
Unlike PUBG, Fortnite only has one map, but it gets updated and changed every season. My last blog post was called “RIP Moisty Mire” because Epic removed the swamp that took up the entire southeast portion of the map and replaced it with a desert for season 5. In season 4, a meteorite struck the location in the center of the map (Dusty Depot) and turned it into Dusty Divot. And it’s not just simple stuff like that; the seasons are themed and bring along interesting changes. At the end of season 3, players knew something on the map was going to get wiped out because you could see the meteorite in the sky, so there was a lot of speculation on what would happen. When the meteorite did hit in season 4, there was a new consumable item called “hop rocks” (essentially meteorite fragments) that allowed players to jump higher. At the end of season 4, a rocket that had been on the map for a while took off and smashed into the sky, cracking it like glass and introducing “rifts.” Random stuff started appearing and disappearing on the map, and like I said, an entire section of the map disappeared and got replaced. It’s just a really cool method of storytelling for a game mode that really doesn’t even need a story (but I’m absolutely glad that it sort of has one).
If I had to point out a weak point of Fortnite, it’s that there are a lot of people that play, and the game is immensely popular among audiences ranging from teens to adults. That means you’ll probably have teenagers on your team, and if you have voice chat on, you will hate your life. I keep it turned off, and I’m thankful Epic gives me the option to do so. It also sucks when you’re playing squad mode (teams of 4), and some random jackass refuses to land with the rest of the team. It puts everyone, including the solo person, at a disadvantage compared to a team that sticks together and lands in the same area. You can play solo or duo mode (or only play with friends you trust to not be jerks) to avoid this, but it’s a part of squad mode life if you’re playing with randos.
Finally, I just want to say that Fortnite gameplay is really fun, even if it’s fundamentally frustrating. You’ll probably die most of the time wherever you land, you’ll probably win very infrequently (only one person/team can win out of 100 people, after all), and you’ll probably get sniped out of nowhere right after you pick up the best close-range gun in the game. That’s a part of competitive shooters, and yet, that challenge is what makes it fun. I personally consider it a win if I at least take someone else out before dying. Of course, if you don’t like shooters, you probably won’t like Fortnite, but I’d also point out that I typically don’t like competitive shooters or third person shooters, yet here I am, telling you that the competitive, third person shooter called Fortnite is a blast.
My attention has been divided between this blog and a couple others (the bottom two on the side bar over there –>), but if you’ve been paying attention to those, you’ll probably notice that my activity on those has waned as well. Most of my writing focus lately has been going toward the sequel to Iterate, which is actually coming along pretty well, but I’ve been sorely lacking in posting personal/life updates here, so I guess it’s time to do that.
I’ve been wanting a new laptop since the beginning of this year, and for the first time, it’s not because there’s a laptop out that I want. Quite the opposite, it’s because I feel ready for a more powerful laptop than my 2015 12″ MacBook, but the problem is, there is literally not a single laptop on the market that I want.
I usually default to buying a MacBook when it comes to laptops because Windows laptops are kinda terrible, but I’m just not the biggest fan of Apple’s current laptops. Now that they’ve updated the Pro models with a Touchbar and didn’t bother with the non-Touchbar version, which is literally the only laptop I’m interested in from them, they’ve just totally lost me. But even before that update, I had given up and gone over to the Windows world for a laptop, and let me tell you, that market is a total and complete mess.
This is probably the wrong place for me to get into the details since I have a tech blog, but I’m gonna do it anyway. The issues boil down to a combination of some (or perhaps all) of the following:
Poor trackpads, aka, “Does this laptop have a glass surface with Windows Precision drivers?” This is a dealbreaker – don’t buy a Windows laptop that doesn’t have this if you plan on using the trackpad.
Poor customer support/lack of local support options/quick turnaround for issues
Build quality, including case flex (does the chassis give when you press down on it?) and screen flex, which I was horrified to learn was an actual problem in the Windows laptop world (can you tell I haven’t purchased one in a while?)
Noise tests (how loud do the fans get?)
Poor quality speakers (no one comes close to Apple here)
I pity anyone shopping for a Windows laptop. I bought a Razer Blade 15 and returned it because it simply wasn’t worth the price tag for the heat/noise it generates. And heck, now I feel bad for the pros that went out and bought the new i9 MacBook Pro, because I guess those are throttling hard (7-25-18 update: Apple has apparently fixed this with a software update yesterday). But, at least if you want a Touchbar (or don’t mind paying the premium for one), you can buy a pretty decent 13″ or 15″ Core i7 model…so I guess that’s something.
Anyway, I gave up on that and instead just focused on my desktop. I mounted a monitor arm on the wall next to the sofa, so now I can easily use my desktop while relaxing. I also bought a GTX 1080, which I installed yesterday, so hopefully I’ll be set for another 3 years or so with that (my GTX 970 was just over 3 years old, and honestly would’ve still been fine had I not gotten into VR or wanted a 144Hz 1440p gaming monitor…).
I guess on that note, I’ve been playing a ton of Fortnite, so if you want to play together, hit me up on my mobile (that’s a little old person humor for you, the joke is that I’m old; social media is fine). Oh, if you don’t read my tech blog, I guess I should mention that VR is awesome, and I’ve been playing Beat Saber almost every day since I got my VR headset. It’s really cool, and I think the most fun is truly to be had with games that are designed for VR rather than shoehorned to fit VR. Fair warning about it, though: I don’t have issues with nausea (the headsets are super fast and responsive these days), but some people still get motion sickness.
Anyway, here’s to hoping Apple releases a good laptop without a Touchbar that has at least a current generation Core i5 sometime in the next year so I can buy one. Sigh.
For some reason, I thought I’d posted here that Iterate would have a sequel at some point, but I suppose I confused platforms. Regardless, this post is to say that yes, Iterate will have a sequel, and I am currently writing it. As much as I’d love to release it on August 28th, 2018, that’s a pretty unrealistic goal right now. While subject to change, I’d say a more likely release will be Q4 2018.
I’ve always thought augmented reality (AR) was the future. I’ve mentioned it before on social media, I’ve said it on a podcast I used to co-host – AR is cooler and more important than VR.
However, that doesn’t mean that I didn’t see value in virtual reality. In fact, I’ve wanted an HTC Vive from the moment I heard about it, but I refused to pay the $799 price tag. Since then, I’ve maintained a half-watchful eye on the market, and I admit, I’ve been a bit curious every time I passed by the Microsoft Store and saw customers playing with the VR demo. The wires always turned me off, though, and I told myself “I’ll get one when they’re way cheaper or when they’re wireless.”
Then, a couple weeks ago, SwiftOnSecurity tweeted this:
$200 all-included Oculus/SteamVR compatible native Windows10 VR headset with 1440×1440 per-eye resolution WITH two FULL MOTION CONTROLLERS built by Lenovo half off. Not a referral link. Microsoft-subsidized gear shipped to your door. Come get y’all juice. https://t.co/wZEQoWvxF5
Needless to say, I was intrigued. Oculus Rift had come down to $399 and the HTC Vive to $499, but I still didn’t want to make that level of investment on a wired headset. $199, though? Take my money!
And indeed, they did, because I now have a Lenovo Explorer VR headset. Well, I guess it’s actually a “Windows Mixed Reality headset,” but I’m a little unclear why it’s branded as that, considering it and other Mixed Reality headsets are all VR rather than AR, the latter of which is what the term “Mixed Reality” implies. I suppose it could be Microsoft’s way of hyping “holograms” and all of the tech they’ve prepared for the Hololens without actually having a consumer-ready version of that product available for purchase. If that is the case, it was wasted on me, because I’d been ready for the Hololens since long before I got this VR hardware.
Windows Mixed Reality headsets are compatible with Mixed Reality games in the Microsoft Store, as well as Oculus and Vive games using SteamVR (you just have to download the Windows Mixed Reality program in Steam to get it to work). Outside of gaming, Microsoft lets you interact with Universal Windows apps inside of your own virtual reality house. It’s honestly pretty cool, despite how incredibly useless it is. But again, the novelty is still quite incredible. It was the first thing I saw in VR, and my first reaction – and, I imagine, most people’s as well – was just “whoa.”
Virtual reality has a lot of moments like that, not just when you first put on the headset. The first time I “telepathically” controlled something, I got such a huge grin on my face. The first time I shot a gun in VR, I couldn’t believe how incredible the tracking was. The greatest thing about this headset, or any other one, is that once you try VR, the experience sells itself.
There’s a lot of weird stuff with Windows MR, and I’m sure there are bits that may or may not apply to other VR headsets, but look, I’m just going to say this plainly and simply: virtual reality is incredible. You’re going to keep reading this article and think, “wow, there’s a lot of weird stipulations and issues. Is this even worth it?” So just imagine it this way – after every negative thing or issue I mention in the rest of this article, imagine the sentence is followed with, “but VR is awesome, so you won’t care.”
I know, I know, that sounds like a wild assertion, but consider this: if you lived in an alternate universe where all smartphones had one hour of screen-on-time battery life, took blurry pictures, and crashed about every 30 minutes, but they still gave you the whole app ecosystem and the ability to have the Internet anywhere, you’d probably still want a smartphone, right? Having a communication tool like a smartphone in your pocket is incredible. VR is the same, though not quite as life-changing.
So, that said, let’s dive a little deeper.
The hardware of the Lenovo Explorer has one major drawback, but it only affects some people. There is no hardware adjustment for pupillary distance, so if you’ve got wideset eyes, you’re out of luck. This headset will always be blurry for you in at least one eye, so you should definitely look into one with a hardware adjustable IPD. This was not an issue for me, so no worries there. It also doesn’t have a mic or speakers built in like many competing headsets do, but it’s got a headphone jack.
The only other negative thing I can say about the hardware is that the Lenovo Explorer’s pricier competitor, the Samsung Odyssey, has a slightly higher resolution. As far as I know, most Windows MR headsets are 1440×1440 per eye, but the Samsung device is 1440×1600. That’s not to say that the resolution on the Explorer is bad, but even having never used another VR headset, it’s clear that the resolution could be better (the lower resolution units create what people call the “screen door effect”). I imagine this would only take away from the experience for you if you were used to using a headset with a much higher pixel density, but I don’t think such a device exists yet. In the meantime, it’s such an immersive experience that I really stop noticing after a bit anyway.
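For a sense of scale, the resolution gap between the two headsets works out to roughly 11% more pixels per eye for the Odyssey (using the resolutions stated above):

```python
# Per-eye pixel counts for the two headsets mentioned above.
explorer = 1440 * 1440   # Lenovo Explorer
odyssey = 1440 * 1600    # Samsung Odyssey

# Relative difference, as a percentage.
extra_pct = round((odyssey - explorer) / explorer * 100, 1)

print(explorer, odyssey)  # 2073600 2304000
print(extra_pct)          # 11.1
```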
Microsoft has a Mixed Reality PC Check app that you can download to make sure your computer meets the minimum requirements, but I’d also note that a lot of people, despite passing the check, have issues with built-in Bluetooth adapters and end up needing to buy a Bluetooth 4.0 dongle (I got this one for a whopping $13, and it works great). The controllers eat batteries pretty quickly, so you’ll definitely want to buy rechargeable AAs and a charger. Mixed Reality also requires Windows 10 with at least the Fall Creators Update installed, but the April 2018 update is highly recommended.
The other downside as far as hardware requirements go is that they’re…well, steep. I built my gaming desktop three years ago using mostly next-to-best components, and my GTX 970 is literally the minimum requirement for most VR games. I can’t play Fallout 4 VR or some of the other big name games that were ported over to VR, but I’m actually not super upset about that…yet.
Oh, and speaking of Fallout 4, even if you own it, you have to buy the whole game again to get the VR version, and this is true for most VR games I’ve found. I understand that a lot of additional work goes into porting these games, but I own Fallout 4 and a season pass, and I feel a little cheated that I have to shell out $60 if I ever upgrade my computer and want to play it in VR. I’d be fine giving them an extra $15, but come on, I’ve given you ~$75 already.
The software component is actually the biggest downside of Windows MR and the Lenovo Explorer so far. It’s completely worth the hassle, but it is a hassle at times. Part of it is that this is all new, and while it’s getting better, there are bugs, and the other part is the learning curve that comes with a new technology. Some people have issues with the controllers connecting, some people have issues with SteamVR crashing, some people have issues with the boundary being lost – all of which are solvable, but frustrating things that I experienced.
If you do end up buying one of those Bluetooth dongles, don’t forget to disable your built-in adapter, or it will stop the new one from working properly. Sometimes SteamVR just crashes and you have to restart SteamVR, Steam, the MR Portal app, or your whole computer. Oh, and the boundary? That’s the thing that you trace in reality to tell you in VR where you can move in your room without bumping into stuff. It’s really neat, but it requires a well-lit room and a floor with a distinguishable pattern, because besides the built-in accelerometers and whatnot, it also uses the front-facing cameras to determine your location.
It took me about 20 minutes of racking my brain and googling to figure out why I was getting the “boundary lost” message (I had to turn on the lights), and though it seems obvious in hindsight, it is such a typical Microsoft error message, lacking the most basic instructions to fix it. I’m sure it’s in the manual, but seriously, who gets a VR headset and sits down for an hour to read a boring book about it?
Within VR environments the biggest issue is locomotion. It’s something that has yet to be solved in a great way that doesn’t also cause a large amount of users to get motion sickness, so as a result, most games use some kind of teleportation mechanic. This is a very non-immersive solution, which sucks, but the other options are 1) make people sick or 2) don’t make games that require that kind of movement.
Option 1 has resulted in games like Pavlov VR, and option 2 has resulted in lots of “wave shooters” and games like Beat Saber. If you’re unfamiliar, Pavlov VR is an online FPS that some Steam reviews call “Counterstrike for VR,” and it employs a locomotion technique where you basically just put your finger on the left touchpad to move around. I tried it, and while at times it’s a bit disorienting, it didn’t make me sick, and I actually sort of liked it (I returned the game though, as I was stupidly hoping for more offline content, of which the game has almost none).
Wave Shooters are games like Superhot VR and Raw Data, where you stand in one spot and shoot enemies as they approach you. You can move around within the space you’re standing, but to, say, travel down a hall, you either teleport there, or your character is “on rails” and moves there automatically when you’re done with the current area.
Beat Saber is a rhythm game that takes advantage of limited physical movement, so while you don’t have to travel down a hall or anything, you do occasionally have to dodge or duck under obstacles that approach you. This type of game works very well in VR, as do wave shooters, and while Pavlov VR was novel, I feel like maybe people are only playing it because it’s a VR shooter game, not because it’s necessarily a great game in general. On the other hand, Beat Saber and Superhot VR aren’t good games with VR attached somehow – they’re good because they’re VR games. That is to say, the best games I’ve played in VR so far are the ones that do things that you can only do with VR rather than ones that are traditional-style games adapted to VR.
Anyway, it’s easy to overlook the issues with VR when you put on the headset. Even if it crashes one out of 10 times, or you have to unplug your cables and plug them back in 20% of the time, when it works, you just forget all of that. I honestly can’t remember the last time a video game has wowed me as much as Superhot VR did. Sure, I loved the last few video games I played (all Fire Emblem games), but it was a familiar, predictable experience. Yes, I am admitting there is a lot of novelty with VR, which is one of the things I hate about Nintendo’s hardware every time they release it. The Wii controllers were fun until they weren’t new anymore, then they became a detriment (in my opinion, at least).
The Wii succeeded in making games feel more immersive, but it only brought a part of that equation. Sales were great, but anecdotally I believe that once the newness wore off, a lot of people used Gamecube controllers for pretty much anything besides Wii Sports. In fact, if you look at the going price of The Legend of Zelda: Twilight Princess on eBay right now, the Gamecube version is going for about twice as much as the Wii version, the latter of which forced you to use the Wii controller. Of course, correlation does not imply causation, blah blah, but the Wii version of the game is technically more modern and displays in 16:9, whereas the Gamecube version only does 4:3. I don’t know why else the Gamecube version would be more popular other than the controller.
At the very least, I think this shows that for certain types of games, people don’t want gimmicks – they just want great gameplay. This leads to a pretty obvious question: is VR just a gimmick?
I think it’s a fair question to ask, considering one of the reasons PC gamers prefer PC gaming is because a keyboard and mouse gives you much greater control over a game than an Xbox controller, and that is something that VR takes away. While you can technically play some VR games with a controller on your PC, the experience is greatly diminished by not using the motion controllers.
If you ask me, a person who is as fallible as any other, I’d say that the immersive nature of VR sets it apart from a device like the Wii, whose gimmick was merely a controller. You could also say that multitouch displays were a gimmick when the iPhone came out, because at the time, “real work was done on devices with keyboards.” Clearly, touchscreens were no gimmick, and I think VR falls somewhere closer to touchscreens than the Wii remotes. Windows Mixed Reality applications are very much a gimmick, because there’s very little practical application for them, but VR gaming is the exact opposite of that.
VR is one of those things that you can’t do justice by talking about, seeing pictures, or even seeing video. It’s one of those things that you have to experience to understand. If you have the resources, I’d encourage you to try it out, and let me know what you think.
No matter where you stand on iOS versus Android or macOS versus Windows or really Apple versus any other ecosystem, there is a universal truth that we can pretty much all get behind: Siri is not good.
Techpinions recently posted an article on iPhone X customer satisfaction, and the graph details this truth pretty brutally – consumers are super happy with the iPhone X in every aspect except for Siri. This has prompted a lot of extra opining on the subject lately by tech journalists, so naturally I couldn’t resist jumping into the fray myself.
I’ve mentioned before that I don’t feel like tech journalists understand the plight of regular people, since most of them exist in a bubble in which they live and breathe any and all tech, which divorces them from reality at times. This is another of those cases, because over and over again, when I hear tech journalists complain about Siri, it’s always something along the lines of, “Siri sucks because it doesn’t have enough access to your data.” I have no idea where they are drawing this conclusion from, but it always drives the conversation to what Apple can give Siri access to in order to make it better, which is a terrible assertion and the wrong direction for the conversation to go in entirely.
Siri doesn’t need additional access to be better; Siri just needs to work, period. The only things Siri is okay at right now are setting reminders/timers, sending text messages, and controlling HomeKit accessories. And while Siri is good at those things, even some simple commands throw it for a loop. As I understand it, commands given to Siri get sent to one of multiple Siri processing servers at Apple, and then a result is returned. Sometimes, it feels like the fate of your command is dependent on which server it hits, because 9 out of 10 times, your command will process and work properly, but the 10th time, it won’t.
One wacky personal example of this was when my wife and I were headed on a weekend getaway to the mountains. I activated Siri and said, “Give me directions to Lake Lure, North Carolina,” and consistently, multiple times in a row, it returned directions to a completely unrelated town in a totally different state. There was no rhyme or reason to this. This town didn’t sound anything like my command, Siri just straight up failed to get anything about my request correct other than that I wanted directions to a place. Siri has navigated me to Lake Lure multiple times with no issue, but in this instance, it gave a completely bizarre response.
And it’s not just stupid failures of the server that make Siri dumb; she’s literally unable to handle requests that feel totally obvious.
“Hey Siri, turn on the flashlight.”
“Sorry, I’m unable to do that.”
“Hey Siri, take a selfie.”
*opens the front facing camera, doesn’t take a picture*
These are both on-device commands which pose little to no privacy concerns.
Then, there’s one of the biggest complaints about Siri – general knowledge questions. I’m not talking about personal data; I’m talking, “What is the name of those gates in Japan?” For this type of question, Siri will just do a Google search and display the top results. The correct result is at the top, but that’s not useful at all if I’m driving, if I ask a HomePod, or if I’m across the room. Google Assistant, on the other hand, responds verbally with the correct answer and some information from Wikipedia, correctly identifying the answer to my question as “torii.”
Take note again: that question was not context-based, nor did it require any sort of permission to access my personal data. It was simply a request to find some basic information on the web. Sure, I can ask Siri how tall Natalie Portman is, and it’ll answer, but that’s just par for the course. If I can ask that, I should be able to ask other general knowledge questions like I can with Google Assistant, and the fact that Siri only sometimes knows the answer to my questions makes me less likely to try it for new things.
Why waste my time attempting to see if Siri will tell me what the fastest production car in the world is when it’s so inconsistent with literally everything else? It will (surprisingly) answer that question, by the way, which is the saddest part of this situation. Siri can do a lot of things that people don’t know about, but few of us are willing to try because of how often it ends up being a waste of time.
Quite frankly, I don’t care if Siri can’t tell me information about flights I booked or if it can’t give me contextual information based on the website I’m currently looking at. Lacking the ability to do both of those things, which Google Assistant can do, is not why Siri sucks. The fact that Siri keeps more data on device is great for users who care deeply about privacy, and I don’t think that’s something that needs to change.
Making Siri better is not a matter of privacy versus convenience; it’s a matter of getting consistent performance, being able to do the things that you’d expect of a smart assistant (within the focused space of on-device privacy), and becoming better at answering general knowledge questions.
Iterate is now available on Smashwords in quite a few different digital formats. This is in addition to being available on Amazon in Kindle and paperback. If you’re waiting for a non-Amazon paperback option, I evaluated other storefronts, and for now, it seems like Amazon is the path of least resistance.
Apple held an education event on March 27th, which is probably not a surprise to anyone reading this, nor is it any sort of breaking news that the biggest thing to come out of the event was a new 6th generation iPad with Apple Pencil support aimed at the education market. This iPad is very similar to the low-cost, $329 iPad that was introduced last year, with a bump from an A8 to an A10 (not A10X) processor in addition to the aforementioned Pencil support. This new iPad still has a non-laminated display (which means there’s a small air gap), lacks ProMotion (Apple’s adaptive refresh rate technology that goes up to 120Hz), and doesn’t have the smart connector to snap a keyboard on, but for half the price of an iPad Pro, I do think it delivers quite a bit more than half of the iPad Pro experience.
I don’t think a review of this iPad is super necessary, because anyone that buys one knows exactly what they’re getting – a really good iPad at a much easier to digest price point. Again, it’s not the best iPad experience, just a really good one. But this event wasn’t targeted at consumers; it was targeted at education, the market in which Apple (and Microsoft) are losing serious ground to Google thanks to low-cost, easy-to-manage Chromebooks and Google Education offerings.
Apple’s pitch for this new iPad was aimed at this specific market. For educators, the new iPad is discounted to $299 and the Apple Pencil to $89 (because of course, it’s not included). There was a host of updates to iWork, including some fancy new annotations with the Pencil, announcements of 200GB of iCloud storage per student account, and a few other creative-type things relating to AR and eBooks, but the main takeaway here is that Apple wants to make their offering clear to the education market.
This is an opportunity not only to sell iPads, but to get iPads in front of kids who may otherwise not have been exposed to them, which could create a positive brand association down the line, ensuring future sales. If that sounds dumb, my elementary school had Apple IIs and old Macintosh desktops, and when my parents bought my first computer (a Packard Bell running Windows 3.11), I was a little upset that it wasn’t one of the ones we had at school. Granted, many kids today grow up with tablets, whereas kids from my generation probably didn’t have a computer at home until the late ’90s.
From a school’s perspective, there’s very little doubt that Chromebooks are still a cheaper option. For the price of an iPad alone, you can get a pretty decent Chromebook with money to spare. It may or may not have a touchscreen, but what it will have is a keyboard, and it won’t be nearly as fragile, which means it’s ready to go. Apple’s big sell was pretty heavily reliant on the Pencil (or the $49, education-only Logitech “Crayon” that will be launching soon), and that still doesn’t include a Bluetooth keyboard to type on (since this iPad lacks a smart connector) or a protective case.
Speaking in terms of price alone, the Chromebook clearly wins, but Apple’s vision for the product goes quite a bit beyond the traditional idea of a computer in the classroom (and of course, Apple simply doesn’t compete on price). I’m quite sure there are some really cool merits to giving kids the ability to create eBooks in their classroom, introduce them to coding with Swift Playgrounds, and maybe give them the ability to create music with GarageBand. I would never downplay the importance of these tools, and if the iPad came with a keyboard, this entire blog post could basically be deleted from existence.
However, that’s a major issue from my perspective – this new iPad targeted to education does not come with a keyboard. If we could somehow pretend that keyboards aren’t important anymore and that kids won’t break an iPad without a case, that would knock $99 off the total cost of these devices. See, besides the $49 Logitech Crayon I mentioned earlier, Apple also announced a $99 Logitech-created rugged keyboard case, which brings the total up to $447 minimum to give students an iPad with a stylus and keyboard/protective case. That’s more than double the price of many Chromebooks. Education is a very price-sensitive market, and Apple’s offering is not just double the price, but double the price at education scale. We’re talking $447 times hundreds of students instead of $200-250 times hundreds of students. For a school of 500 kids at $250 a pop, that’s a difference of nearly $100k, and there are cheaper Chromebook options to make that rift even larger.
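To put concrete numbers on that gap, here’s a quick back-of-the-envelope sketch. The 500-student school and the $250 Chromebook figure are illustrative assumptions; the iPad, Crayon, and keyboard case prices are the ones discussed above.

```python
# Back-of-the-envelope deployment cost comparison for a hypothetical
# 500-student school, using the education prices mentioned above.
students = 500

ipad = 299           # education-discounted 6th-gen iPad
crayon = 49          # Logitech Crayon stylus
keyboard_case = 99   # Logitech rugged keyboard case
ipad_total = (ipad + crayon + keyboard_case) * students

chromebook = 250     # assumed mid-range Chromebook, keyboard built in
chromebook_total = chromebook * students

print(ipad_total)                     # 223500
print(chromebook_total)               # 125000
print(ipad_total - chromebook_total)  # 98500 -- nearly $100k
```

Even shaving $50 off the Chromebook price per unit widens that gap by another $25,000 at this scale, which is the point: small per-device differences compound fast across a district.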
Keyboards are important. Yes, touch is incredible, and it’s captured and redefined the world via mobile, but most kids will get jobs doing work that is still done with keyboards. Really, the only job I can think of offhand that you can do entirely with a touchscreen is cashiering at a fast food place.
The eBooks Apple wants to help kids create, the coding they’re hoping to inspire kids to learn about – just try doing either of those things on an iPad without a keyboard. You will drive yourself crazy. Kids need to be taught how to type because content creation is severely restricted without a physical keyboard. I like to think I’m a pretty forward-thinking person, and I can’t imagine a 12-year-old kid today getting an office job in 10 years that doesn’t require them to create reports, type up long emails, work with spreadsheets, or a whole host of other things that are just incredibly difficult on a touchscreen and oftentimes impossible (read: unavailable) on iPads.
I know how slow corporate adoption can be, and even for the organizations that upgrade quickly, many are bound by the vendors that make the software. Specialized software for niche markets is notoriously bad at updating, which is why so many machines out there still run Windows 7 and even Windows XP.
By not providing kids with keyboards, we’d be setting them up for failure, so these $299 iPads don’t cost $299, they cost $398, which at that point is already double the price of a Chromebook before you tack on the Crayon or Pencil. Yes, Apple doesn’t compete on price, but this is not a market that can absorb Apple’s strategy for competition, save for some of the wealthier private schools (it should be noted that Apple held this event at a public school in Chicago).
Quite frankly, I have some concerns about Chromebooks for a similar reason – that kids aren’t getting exposed to the actual operating system they’ll be using once they get a job (most likely Windows, sometimes macOS). Hopefully this is an unfounded concern, or I’m underestimating the ability today’s kids will have to adapt from ChromeOS to a similar-but-very-different system, but this same adaptation simply doesn’t apply to my concern about using a real keyboard.
I suppose you can argue that there are plenty of successful people who employ the hunt-and-peck method of typing, but at least that method was honed on a physical keyboard. Maybe they type one third or even half as fast as the average WPM of a person who knows how to type properly, but the reality of that situation is still that they are not as prepared or equipped for the real world. Heck, I even shudder to think of trying to take notes in college without the ability to type. I suppose you can handwrite notes with an iPad and a Crayon/Pencil, but personally, I type about three times as fast as I write.
This isn’t to say that Apple’s education strategy will be a failure for their business, but rather that I hope it’s not a failure for kids or schools. Only time will tell in that aspect, but I guess on the bright side, the new iPad does seem like a solid purchase for consumers.
The progression of advanced technology has rapidly changed our lives, most notably in recent years with the advent of mobile technologies so advanced that PC purchases have been in decline year-over-year for some time now. Yes, there are far more positive impacts than negative, but that doesn’t mean the negative should be completely downplayed. While there are plenty of debates over what tiny screens in our pockets have done to us socially as a species, there are also many far-less controversial consequences, like distracted driving.
Today, however, I’d like to talk about one of those controversial impacts, and that is the issue of privacy. The world has become very data-driven, for example, by services like Instagram, which is a simple photo-sharing app at face value, but is a huge sales and marketing tool for influencers. We can now buy $50 devices that sit in our homes and react to commands to buy things and turn our lights on, and cameras on our phones are so good that owning a point-and-shoot digital camera these days is practically…well, pointless (pun intended).
But these things come with tradeoffs, and as we move further and further into this data-driven ecosystem, people are starting to become concerned about how much is too much. Is it okay to put an Amazon Echo, Google Home, or Apple HomePod in your home, knowing full well that those devices listen to you and have the potential to accidentally record and store much more data than intended?
We know, almost certainly, that the Amazon Echo is not recording everything that a person says. The way these devices work is by waiting to hear a specific waveform – that of your trigger word (“Alexa,” “Okay Google,” “Hey Siri,” etc) – at which point it records the sound that follows, sends it home for analysis to Amazon, Google, or Apple, then returns a result (this is also how assistants work on your phone). They don’t record all the time (except when an error caused the Google Home Mini to do exactly that), or we’d know by sniffing the network traffic.
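The flow described above can be sketched roughly like this. To be clear, the function names and the wake-word check are hypothetical stand-ins for illustration, not any vendor’s actual implementation:

```python
# Simplified, hypothetical sketch of how a wake-word assistant operates:
# audio is checked locally, and nothing leaves the device until the
# trigger phrase is matched.

def matches_wake_word(audio_chunk: bytes) -> bool:
    """Stand-in for the on-device waveform matcher (e.g. 'Alexa')."""
    return audio_chunk.startswith(b"WAKE")  # toy heuristic for illustration


def send_to_cloud(command: bytes) -> str:
    """Stand-in for the round trip to the vendor's servers."""
    return f"result for {command.decode()}"


def handle_audio(stream):
    """Listen continuously, but only transmit after the wake word."""
    for chunk in stream:
        if matches_wake_word(chunk):
            command = next(stream, b"")    # record what follows the trigger
            return send_to_cloud(command)  # only now does audio leave the device
    return None


# Background chatter is discarded locally; only the post-trigger
# command is ever "uploaded."
handle_audio(iter([b"chatter", b"WAKE", b"what time is it"]))
```

This is also why the network-traffic argument in the paragraph above holds: if everything were being uploaded, the constant outbound stream would be trivially visible to anyone watching the device’s traffic.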
The reason I used the phrase “almost certainly” above is that there is always the possibility of the government forcing Amazon (or any of these companies) to record and store all listening data for a single user, but the instant a savvy user noticed increased network traffic and tracked it back to their voice assistant, it would be the biggest news story of the week.
People worry about these devices, and have every right to, because as consumers, we don’t completely understand them. However, because of that very concern, worry is often misdirected. Take, for example, those who refuse to have a Google Home in their living room but carry a Pixel 2 on their person at all times, going so far as to keep it on a nightstand 3 feet away from their head while asleep. The same concept applies to those who tape over the cameras on their laptops but would never do the same thing on their Galaxy S9.
It’s not a completely black and white issue, but if you are truly concerned about privacy, it would be foolish to take precautions like taping over your laptop camera and depriving yourself of modern home assistant technology, but not also take some kind of precaution with your phone – the most personal device currently imaginable (which, by the way, probably has at least 2 microphones and 2 cameras, minimum).
Consumers are almost always willing to trade a little privacy for convenience, which is why Google’s entire business model of knowing everything about you is working so well for them. Conversely, there are still those that are concerned about these types of things going too far, and Apple has a model that is more friendly to those consumers. Yet, either way, some privacy is sacrificed, or at least, the possibility of complete privacy is given up. Even with a privacy-focused company like Apple, if you backup your phone data to iCloud, the government can subpoena that data from Apple, and they will provide it. But does that mean you should never use a phone?
We should always be wary of what information we give to companies, because whether or not that info improves your experience with a product or service, it is almost certainly also being used to feed complex algorithms with ways to make more money from you. On a personal level, I don’t particularly care if Google knows the area I work in so it can advertise relevant local eateries around noon, but some people take offense at this, even if the company is upfront about it.
Privacy is a difficult and delicate issue, and there is no blanket statement that those of us who understand the intricate details can offer to help the less tech-savvy make better decisions. Unfortunately, I’d guess the lack of brevity tends to make people disinterested. What we need to remember is that we shouldn’t oversell convenience without taking into account what our data is worth, but we also shouldn’t oversell the value of certain data that we can exchange for a greater amount of convenience.
If you’re looking for a takeaway, I’ll say this: being concerned about putting a Google Home in your bedroom is absolutely, completely valid. Do your research, understand how it listens and responds, then make that choice, but also remember that your phone follows you around and your Google Home does not. Mentally separating the two because your phone is more familiar is cognitive dissonance if you truly want to sacrifice convenience for security.
Something a lot of people probably don’t know about me (unless you read my blog) is that I’ve been writing novels for over a decade. I remember the exact moment that I decided I wanted to be an author: I was walking across LSU’s campus (with my long, neatly straightened hair #throwback) to Grace King Hall, and I suddenly just knew. It’s one of those weird memories that will stick with me forever, and I can still picture the lush, green oaks, the old residence hall in view, and I believe it was even cloudy that day.
Yes, writing is a hard game to break into, and I never expect to make more out of it than a hobby, but that doesn’t mean I won’t make the attempt. I know it takes a lot of talent – talent I may or may not have – but I at least have the perseverance to try.
I started my first novel in 2006 and finished it in 2011, but don’t let that fool you – most of that period was procrastination, and the actual writing was probably more like 5 or 6 months. Also, that novel was bad.
I wrote my second novel starting right at the end of 2011 (literally a day before the new year) and finished the rough draft 5-6 weeks later. I actually self-published that one under the name K.J. Holdeman and shared it out via social media like one time. It was the first one I published, and I was – and still am – pretty proud of it. I’m currently re-editing it and will publish it under my real name within the coming months.
My third novel was written in 2015 (might have spilled over into 2016 a little, I can’t remember). It ended up being a convoluted mess, despite being fun to write. I didn’t publish it, and won’t ever.
My fourth novel is Iterate, which I’ve clearly published and am really excited about. This one was a blast to write, and I have a sequel for it in very early planning stages, but I’ve got some other ideas I want to get to before starting that in earnest.
That might sound like the end of that story, but it’s not. I’ve got folders on my computer and posts on my writing blog with dozens and dozens of abandoned first chapters, a handful of outlines, and hundreds of thousands of words of partially written stories that never got finished for one reason or another. And just in case you think that’s an exaggeration…
None of those are the completed novels I didn’t publish (which together are about 110k words). The longest one pictured above was actually a rewrite of something (not pictured) that was already around 20k words (I kept the first ~4-5k words and rewrote the rest). I have so many stories left off around 5k words that I didn’t bother screenshotting below 7k.
My current goal is to continue improving, write more, and become better at marketing myself. Specifically, this means:
Challenge myself. I don’t like doing atmospheric writing, but maybe it’s because I’m not great at it. I need to fix this by creating more atmospheric settings.
I write for about 5 hours a week when I’m working on a novel. I know this because it’s how I spend my lunch breaks. I should dedicate more time on the weekends.
I don’t know how or where to advertise my work except social media. Being a successful self-published author requires marketing, and that means I have to figure this out.
Anyway, I guess that’s probably more than you ever wanted to know about me and my writing journey.