AUGUSTA, Ga. — The Masters is about the experiences.
Experiences for the spectators (“patrons’’ in Masters preferred parlance).
Experiences for the players competing in the tournament.
For years, there were Arnold Palmer experiences for young players privileged enough to play nine holes with him in a practice round or sit with him over lunch on the clubhouse veranda. Everyone who ever met Palmer has an Arnie story to tell.
Later, there were players who got a taste of the Jack Nicklaus experience. Same as with Palmer, chances to rub shoulders with the Golden Bear were cherished, providing memories for a lifetime.
We’ve now entered a period when today’s younger players who qualify to play in the Masters are getting the Tiger Woods experience.
Woods, at age 46, is not yet a ceremonial golfer as Palmer and then Nicklaus eventually became when they grew too old to contend for a green jacket.
Even as he continues to recover from his gruesome February 2021 car crash, Woods remains a player capable of contending at Augusta, though he enters Sunday’s final round at 7-over, 16 shots behind leader Scottie Scheffler, after shooting a 78 on Saturday, his highest Masters score ever.
Woods will not win his sixth green jacket on Sunday. He’ll have to be satisfied with this week marking the remarkable completion of his greatest comeback ever from the many physical ailments that have set him back in his career.
He teased us with his 1-under 71 in Thursday’s opening round that had him in contention. But Woods has gone backward on the leaderboard since Thursday, shooting 74 on Friday and 78 on Saturday.
So, we can officially say he’s out of it now.
Woods, who has spent so many years using his putter as a dagger against his fellow competitors, was done in by his flat stick, and he sounded completely flummoxed by it. He had four three-putts and one four-putt.
“It was like putting practice out there,’’ Woods said. “I mean, it’s like I hit a thousand putts out there on the greens today. I didn’t think I hit it all that bad, but I had absolutely zero feel on the greens and it showed.’’
Woods’ struggle doesn’t diminish him as the most powerful draw in the sport, particularly at Augusta National.
Even in a week like this, as Woods played his first Masters in two years, there have been players who’ve had Tiger moments that they’ll never forget.
Cam Davis, a 27-year-old Australian competing in his first Masters, had an early encounter with Woods. Davis was playing an afternoon practice round last Sunday and, with the course backing up, Woods joined him for a few holes.
“I was trying to pick his brain a little bit, but at the same time, just enjoying being in his presence,’’ Davis said of his impromptu brush with greatness. “I’ve met him a couple times. It was the first time I’ve played any golf with him. No fans or anything, just quiet. I hit with Tiger. It was nice.
“He was nice, very approachable, very talkative. I’ll definitely remember it as my first go ’round here with him.’’
Aaron Jarvis, a 19-year-old from the Cayman Islands who earned an invitation to his first Masters by winning the 2022 Latin America Amateur Championship in the Dominican Republic in January, had a different experience from Davis’s.
He tried to join Woods during a practice round and was given the Heisman.
“I was turning the ninth hole [in a Monday practice round] and I saw Tiger hop out in front of me,’’ Jarvis said. “I ran up to him and ran through the woods and asked, ‘Mr. Woods, are you playing by yourself, or can we join?’ ’’
“I’m just going to play by myself today,” Woods told Jarvis.
“There’s no better ‘No’ or better rejection from Tiger Woods, right?’’ Jarvis said. “I thought I would give it a shot. It was pretty cool seeing him playing in front of me. And after the round, I got to talk to him and Joe [LaCava, Woods’ caddie] for 10 minutes or so, and it was just incredible. It was just incredible talking to Tiger, and hopefully I get to talk to him in the future as well.’’
No one had a better seat for the Tiger Woods experience than Joaquin Niemann, who was paired with him for the first two rounds this week.
“I really enjoyed playing with Tiger,’’ Niemann said. “I know that anytime I’m going to look back on these two days, it’s going to look like a really special moment.’’
Concurrents is a startup that has been toiling away at a cool graphics technology, and now it is preparing to release Slice, a way to experience a game demo instantly with almost no loading times and limited bandwidth.
The company’s tech gives streamers and game companies an easy way to play and share digital experiences such as gameplay demos.
Bill Freeman, a cofounder of Concurrents, noted it’s not a big step to think that Epic’s recent acquisition of Bandcamp to provide “content, technology, games, art, music and more” will lead to further exploration of digital in-person entertainment experiences.
As digital experiences absorb more of our leisure time, that’s where most of the real content we’ll want to share and experience together will live, either through broadcasting, sharing together, or memorializing moments using technologies like NFTs. Concurrents wants to make it easy to share that content.
“We are at a point where we can engage with a game in two seconds with unlimited bandwidth and stream at 35 megabits per second,” Freeman said in an interview with GamesBeat. “So you’re instantly in the game and flying. We stood up our company Concurrents and we’re coming into the market this summer with a product we call Slice.”
Freeman, who is also president and chief operating officer of parent firm Primal Space Systems, said the company is in talks with several game publishers to use Slice for several new game titles later this year.
Concurrents’ goal is to fundamentally reimagine how to interact with deeply immersive experiences, making it easy to jump between experiences as well as to create and share content with others. The goal is to get players into an in-game experience, one built with hundreds of gigabytes of assets, in seconds.
Concurrents is building a tool suite at Slice.gg for game publishers, influencers, and budding streamers to collaborate on content creation. Slice.gg provides publishers and streamers with the ability to manage the Slices that get pushed to their communities through game distribution channels (Steam, etc.) and social media partners. The vision is to make game distribution and social sharing part of a single strategy for game publishers and the streamers they work with.
Slice.gg is the platform where publishers and influencers will come to get those tools, and where a publisher can push out various game slices to players. The company is preparing to show off the tech and raise more money. Slice.gg will launch in early fall 2022.
Early technology
Concurrents was born from a couple of companies that have been working on the tech for more than a decade. Freeman did a live demo for me in February 2020.
Primal Space Systems previously raised $8 million for its subsidiary Instant Interactive, and it used the funding to build a technology dubbed GPEG, which is like a cousin of the MPEG format used for video, but for graphics.
Primal Space Systems holds 13 patents and is expanding its focus on geospatial applications for GPEG with both commercial and military customers. Instant Interactive is a business unit under Primal Space Systems focused on the entertainment market (animation, anime, live events, and other non-game content developed in game engines that could benefit from the GPEG streaming protocol).
But GPEG, a content streaming protocol, is a different way of visualizing data, and its creators hope it could be a huge boost for broadening the appeal of games as well as making people feel like they can be part of an animated television show.
The idea for the Geometry Pump Engine Group (GPEG) originated with cofounders Barry Jenkins (a medical doctor who became a graphics expert), John Scott (chief technology officer, formerly of Epic Games), and Solomon Luo (a medical vision expert and chairman), who had thought about this challenge for years before creating Primal Space Systems and its entertainment-focused division Instant Interactive. The tech for the game market is now part of Concurrents Inc.
The underlying technology that powers this creativity and instant shareability is GPEG. GPEG prefetches the assets needed by the game (texture tiles, geometry clusters, etc.) directly to the client-side game engine for local rendering.
The GPEG protocol is implemented as game-engine plugins comprising encoder, server, and client software components. The GPEG server software monitors the player’s camera position in the game in real time and prefetches the pre-encoded packets to the client-side game engine, typically hundreds to thousands of milliseconds before the assets are needed.
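To make that mechanism concrete, here is a minimal sketch of the camera-driven prefetch idea in TypeScript. It is purely illustrative: the types, the dead-reckoning of the camera, and the visibility radius are my assumptions, not Concurrents’ actual GPEG implementation.

```typescript
// Hypothetical sketch of camera-driven prefetching in the spirit of GPEG.
// Names (AssetPacket, PrefetchServer, etc.) are illustrative only.

type Vec3 = { x: number; y: number; z: number };

interface AssetPacket {
  id: string;      // a pre-encoded texture tile or geometry cluster
  position: Vec3;  // where in the level the asset becomes visible
}

const VISIBILITY_RADIUS = 50; // level units; a tuning parameter

function distance(a: Vec3, b: Vec3): number {
  return Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);
}

class PrefetchServer {
  private sent = new Set<string>();

  constructor(
    private packets: AssetPacket[],
    private leadTimeMs = 2000, // push assets ~2s before they are needed
  ) {}

  // Called every server tick with the player's camera state.
  update(pos: Vec3, vel: Vec3, send: (p: AssetPacket) => void): void {
    // Dead-reckon where the camera will be after the lead time.
    const t = this.leadTimeMs / 1000;
    const predicted: Vec3 = {
      x: pos.x + vel.x * t,
      y: pos.y + vel.y * t,
      z: pos.z + vel.z * t,
    };

    for (const p of this.packets) {
      if (this.sent.has(p.id)) continue;
      if (distance(predicted, p.position) < VISIBILITY_RADIUS) {
        send(p); // stream the pre-encoded packet to the client engine
        this.sent.add(p.id);
      }
    }
  }
}
```

The property that matters is the one Jenkins describes below: packets leave the server well before the client engine needs them, so local rendering never stalls on the network.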
At GDC last week, Concurrents delivered private demos of a highly detailed 3-gigabyte game level streamed over the convention center’s rather limited public Wi-Fi (and in some cases over 4G cellular hotspots) using GPEG. The GPEG stream made the game playable in 7.2 seconds at 180 frames per second with no compression artifacts at 1440p on a gaming laptop, said Jenkins.
“The game assets were streamed in real-time from a low-cost server in Oregon (no server GPU necessary), yet there was zero added latency because each of the game’s granular sub-assets (texture tiles and geometry clusters) were intelligently prefetched in real-time at least 2000 milliseconds before it was needed by the client game engine,” Jenkins said.
In the one week since GDC, Concurrents has further improved the streaming efficiency for this game, allowing the same game to be started in under five seconds at a third of the bandwidth of last week’s demonstration.
Concurrents
Concurrents is one of a new crop of companies looking past the constraints and costs of video-based cloud gaming (as Google’s Stadia business has seen). In Concurrents’ case, the strategy is essentially to make the game engine a kind of media player, making it possible to stream actual game experiences to players within seconds.
The company’s CEO, Warren Mayoss, says the analogy is an important one. He said in a statement, “Once we recognize that game content can be accessed and shared as easily as other media, it’s not a big step to imagine sharing our in-game play and reactions as immersive content rather than through video.”
The company’s first product is targeted at game publishers. A Slice™ of game content will make it possible for publishers to release game trailers and demos as fully interactive, targeted pieces of game content that will be available to players in seconds.
With publishers onboard, Concurrents hopes to bring streamers onto its platform, offering them a way to create immersive, engaging content that will be instantly shareable and remain interactive for their followers and fans. That doesn’t just mean that streamers can share demos of games: They’ll be able to record their own Slices as replayable pieces of content, offering fans the ability to enjoy game replays and walk-throughs in a fundamentally new, fully immersive way.
Barry Jenkins, Concurrents’ CTO, said in a statement, “Creating slices of gameplay is fun and easy to do. Because the technology can record gameplay events at high framerates using very little data, a whole world of new, cinematic content creation is possible. Things that are not possible in video editing like user-controlled cinematic drone cameras as well as ultra-slow-motion and bullet-time effects can become routine.”
He added, “And each of these cinematic slices can be shared with friends and followers in a format that, unlike video, invites interactivity and further creativity. Concurrents’ slice recording and editing will transcend the limits of video streaming and editing.”
The company has 18 people now.
Handing out slices
The tech could be useful to companies as they promote their games through playable demos.
“At its core, it’s instant access to gameplay. And so it is immersive content, not video. And we want people to be able to share content derived from the games,” Freeman said.
The company is working with game publishers to create slices of games that are easy to socialize and put out into the market. It’s a playable “slice” of a game or just a demo. But it’s not passive like a game trailer in a video. Rather, it’s a playable version of a scene in a game, and it can be shared as well.
“You can instantly jump in and play a portion of the game,” Freeman said. “An influencer can record their gameplay and share it with the community.”
The influencer can share that slice with the community, and players can unlock it and play it. Slices can be attached to non-fungible tokens (NFTs), which can be used as a kind of currency to unlock the gameplay. Players can also float cameras above the gameplay and record their own cinematic experiences within the game slice.
“We’re talking to a variety of different publishers right now, to do the first playable slices and work with their influencers,” Freeman said. “We can work with any game type and figure out which game slices are going to convert people to purchase the game.”
In contrast to cloud gaming, the GPEG-based tech renders the graphics on the local machine. So you don’t have any problems with latency or interaction delays.
“It’s instant access,” Freeman said. “There’s no lag. There are high frame rates. But it’s a fraction of the bandwidth used. So when you run this technology, when you encode this way, you’re playing a game and for a large part, you’re really using five megabits of bandwidth to play a game. Now you’re going to have some peak moments when you’re entering into new areas, as you get some peaks on that.”
The company is working with the Unreal engine, and its system does predictive streaming, figuring out ahead of time which assets need to be pushed to the local client to be rendered. Freeman showed me a demo of a game scene where the tech was working.
“If you’re at home, this behaves just as if it’s a fully downloaded game,” he said.
Nvidia and Kroger today announced a “strategic collaboration” designed to bring more AI-powered applications and services to the grocery realm.
The duo announced the new partnership today at GTC 2022, Nvidia’s annual developer-focused AI conference.
As the largest supermarket chain in the U.S. by revenue, Kroger needs little introduction. But as with many “traditional” brick-and-mortar retailers, Kroger has had to move with the times and embrace technology to connect with consumers where they prefer to transact — today, Kroger claims third spot in terms of online U.S. grocery sales, after Walmart and Amazon.
Kroger has been investing heavily in its modernization efforts, which has included partnering with robotics company Ocado, and teaming up with Microsoft to develop data-driven grocery stores. And its latest partnership fits neatly into those other recent initiatives.
Digital twins
Kroger and Nvidia revealed plans to build an AI lab and “demonstration center” to improve Kroger’s shipping logistics and in-store shopping experience.
Part of this will entail creating so-called “digital twin” simulations, which will reflect actual store layouts. Digital twins are essentially virtual replicas of a real-world entity, and are used to predict how a particular product will perform through real-time data — for Kroger, it’s all about optimizing its in-store efficiency and processes.
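As a rough illustration of the concept, a digital twin can be as simple as an in-memory model that mirrors real-time sensor readings and can be queried in place of the physical store. This sketch is an assumption for explanatory purposes, not Nvidia’s Omniverse API or Kroger’s actual system.

```typescript
// Hypothetical digital twin: a virtual replica kept in sync with
// real-world readings so questions can be asked against the model.

interface ShelfReading {
  shelfId: string;
  temperatureC: number;
  hoursOnShelf: number;
}

class StoreTwin {
  private shelves = new Map<string, ShelfReading>();

  // Mirror each incoming sensor reading into the virtual store.
  ingest(reading: ShelfReading): void {
    this.shelves.set(reading.shelfId, reading);
  }

  // Query the twin instead of the physical store, e.g. to flag
  // produce at risk of deteriorating freshness.
  shelvesAtRisk(maxTempC: number, maxHours: number): string[] {
    return [...this.shelves.values()]
      .filter((r) => r.temperatureC > maxTempC || r.hoursOnShelf > maxHours)
      .map((r) => r.shelfId);
  }
}

const twin = new StoreTwin();
twin.ingest({ shelfId: "produce-7", temperatureC: 9.5, hoursOnShelf: 30 });
console.log(twin.shelvesAtRisk(8, 48)); // ["produce-7"]
```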
The new lab will be housed at Kroger’s HQ in Cincinnati, and will leverage several Nvidia products including its AI Enterprise software suite, Omniverse Enterprise for digital twin simulations, and ReOpt to optimize logistics. From a hardware perspective, the lab will include nine Nvidia DGX A100 systems, InfiniBand networking, and RTX workstations.
Collectively, they will help garner big data insights from thousands of stores across the U.S., including finding earlier indicators of “deteriorating freshness” using computer vision and analytics. They will also work together to optimize routes for last-mile delivery between the point of production (e.g. farm) and the customer’s home.
The main delight of Resident Evil Village is Lady Dimitrescu, the gigantic vampire who has become extremely popular among cosplayers, fan artists and more. So it was interesting to see Maggie Robertson, the voice actor behind the larger-than-life Lady Dimitrescu, win the award for outstanding achievement in character at the recent DICE Awards.
Last week I joined a group of journalists in the offstage winners’ room for the DICE Awards celebrating the best of video games in 2021 at the DICE Summit in Las Vegas. Each group of winners filed through our room and we collectively tossed a bunch of questions at the winners.
I asked the final questions, while other journalists asked the rest. Here’s an edited transcript of our interview.
Question: What do you think, as an actor, you’re able to do now after working on these games that you weren’t able to do before?
Maggie Robertson: Well, I think a theater background is so important, especially in this kind of performance capture work, because it’s about the storytelling of your body. You don’t have sets or costumes or makeup to tell the story for you. You just have your body. I like to do a lot with animal work, animal studies. If you look at Lady Dimitrescu, she’s kind of cat-like. She’s going to be more sensual and curvy. She takes her time. Using that to create more of a distinct physicality for each character can also help you create really clear physical characters very quickly. You can use that as a jumping-off point.
Question: When your character was revealed, it set the internet on fire. What was that like for you as the person behind that performance?
Robertson: Oh, God, it was so strange. Especially strange because I was still under NDA. I couldn’t say anything. I couldn’t tell my roommates. I couldn’t tell my mom. I was just freaking out in my room by myself. It was so surreal, and so much more than anything I could have imagined.
I’m incredibly grateful for it. It’s given me a platform to create a safe space for lots of different communities, like the LGBT+ community. I love that. That’s been the greatest honor and privilege, and a totally unexpected one. It means a lot to me to be able to give back and provide a safe space. I love that Lady D is loved.
Question: When you think back to when you were first getting to know your character, what stood out about her? Did you get a sense that she’d stand out in a series like Resident Evil?
Robertson: Well, she stands out anyway, but–I love the character design for Lady D. The very first time I saw her, she’s so physically distinctive. What I think Capcom has done such a wonderful job with is creating an image that already indicates so much character. You just look at her, and even before she opens her mouth, she slaps you across the face with her character. Again, they just make my job so easy. I looked at the image and thought, “Oh, great. I have 10,000 ideas now about what to do and who she is.” She tells a very clear visual story.
Question: Among all the reactions you’ve gotten from playing this character, have you ever been creeped out or harassed by people? How do you deal with that?
Robertson: Oh, totally. Listen, she was quite the phenomenon when she first came out. I was nervous about that going into it, before the release even came out. I was nervous that I was going to be getting that kind of thing as the majority of the interaction, because she is so fetishized. But I will say that the community’s been really amazing. There’s no escaping the fact that you’re a woman on the internet. That stuff exists. But the overwhelming reaction of the community has been positive.
The first things I received were people reaching out to talk about my work and how much they appreciate what I did in the game. And oddly I get a lot of strangers writing to tell me that they’re proud of me after I win these awards. They’re writing to say, “We’re so proud of you, genuinely so proud.” That’s touching, very moving. It’s been lovely, actually, the reactions.
Question: Has playing a character who’s gained so much maybe unexpected renown in video games–has that opened additional doors for you? Or, conversely, has it been a thing where people reach out to you saying, “We’d like you to play a character that’s like her, but a little different”?
Robertson: It’s so interesting. Time will tell, because I don’t know–this is my very first entry into the world of video games. I happened to get an agent a week before the game came out, so it’s hard for me to tell if my new auditions and new bookings are because I have this shiny new agent, or because I have this shiny new award. Either way I’m very happy about them. But I think time will tell. This is a very small industry. Relationships matter. I’m grateful to have worked on this game with other creators and collaborators that I want to work with again, who treat people well and are creative and always open to new ideas, always willing to work with you and not just at you, telling you what to do. I value those relationships, and I hope they continue to grow.
Question: Is it weird for you that the face of the character is someone different?
Robertson: I find it rather liberating, to be honest. It allows me to have that separation, so that I can now watch the game and experience the game as a fan myself, as an audience member. I’m not overly critiquing my own performance. Especially in terms of the fetishization and this reaction we’re having to her–I wonder how Helena Mankowska, the face model, feels about it. But I enjoy that degree of separation. It allows me to have that bit of space and the safety around it. I can just enjoy it.
A new study from ExtraHop shows a major discrepancy between perception and reality: 77% of IT decision-makers (ITDMs) said they were very or completely confident in their company’s ability to prevent or mitigate cybersecurity threats, yet 64% admitted that cybersecurity incidents at their own companies were the result of outdated IT security plans.
When the pandemic hit and organizations switched to a work from home (WFH) model, many also took the opportunity to modernize their IT infrastructures, finally decommissioning old on-premises applications and replacing them with new SaaS applications or other solutions. Unfortunately, they didn’t modernize their protocol use — leading to some misplaced confidence. Sixty-nine percent of respondents are transmitting sensitive data over unencrypted HTTP connections instead of more secure HTTPS connections. Another 68% are still running SMBv1, the protocol the WannaCry and NotPetya ransomware variants exploit to infect corporate networks.
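The first of those findings is the kind of thing that is cheap to check for. As a minimal sketch, assuming an inventory of internal endpoint URLs (the list here is hypothetical), flagging unencrypted transport can be nearly a one-liner:

```typescript
// Illustrative check: flag endpoints that still use plain HTTP.
const endpoints = [
  "http://payroll.internal.example.com/export",
  "https://crm.internal.example.com/api",
];

const insecure = endpoints.filter((url) => new URL(url).protocol === "http:");

for (const url of insecure) {
  console.warn(`Unencrypted transport in use: ${url}`);
}
```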
The frequency of ransomware attacks over the past few years has only made this discrepancy worse. Eighty-five percent of companies are, on average, experiencing at least one ransomware attack per year, and 74% have experienced multiple attacks.
Another surprising takeaway: most companies admitted to paying the ransom when hit. Seventy-two percent of respondents admitted to paying a ransom, while 42% of companies that suffered a ransomware attack said they paid the demanded ransom most or all of the time.
Despite the FBI discouraging the practice, many organizations choose to make the payment to minimize costs, including business downtime and end-user downtime.
The survey of 500 security and IT decision-makers in the U.S., U.K., France, and Germany was conducted by Wakefield Research and sponsored by ExtraHop. Survey participants came from a wide range of industries, including financial services, healthcare, manufacturing and retail, and worked at companies of varying sizes, including companies with annual revenue exceeding $50 million.
Read the full report by ExtraHop.
Democratic gubernatorial hopeful Jumaane Williams has tapped a lefty activist who supports the abolition of ICE to be his running mate for lieutenant governor.
Ana Maria Archila, of Queens, is a co-founder of the immigrant advocacy group Make The Road NY and has worked at the left-of-center Center for Popular Democracy, directing campaigns for paid sick leave, raising the minimum wage and aiding immigrants, including those brought here illegally.
Organizers of both groups have close ties to the Working Families Party, which recently came under fierce criticism for asking candidates seeking its endorsement whether they would refuse to accept donations from unions representing police and corrections officers.
The Colombia-born Archila’s claim to fame was confronting former Arizona Sen. Jeff Flake in a Capitol elevator about supporting then-Supreme Court nominee Brett Kavanaugh in 2018.
“I have two children. I cannot imagine that for the next 50 years they will have to have someone in the Supreme Court who has been accused of violating a young girl,” she told the senator.
Rep. Alexandria Ocasio-Cortez then invited Archila to be her guest at then-President Donald Trump’s State of the Union address in 2019.
She has supported the #AbolishICE movement against U.S. Immigration and Customs Enforcement, the agency that enforces the nation’s immigration laws and detains and deports illegal migrants.
“The demand to abolish ICE has existed almost since the beginning of ICE,” Archila, then co-executive director of the Center for Popular Democracy, told Refinery29.
“Since its creation, there were organizations that were saying that the inclusion of ICE as an agency that is designed specifically to separate families, put people in detention, to deport them is a dangerous development in the way we as a country relate to migration.”
Williams, who along with Long Island Rep. Tom Suozzi is challenging sitting Gov. Kathy Hochul for the Democratic Party nomination, said Archila “has spent her entire career fighting the uphill battle to support and uplift everyday people—and she has won.”
“Together, we will make the change that is needed to make New York more affordable, livable, and equitable for every family,” he said.
Hochul’s running mate is Lt. Gov. Brian Benjamin, who previously served as the state senator representing Harlem. Hochul appointed him lieutenant governor after she assumed the governorship when three-term Democrat Andrew Cuomo abruptly resigned in a sexual harassment scandal.
Suozzi named former Brooklyn Councilwoman Diana Reyna as his running mate.
Hochul is the heavy favorite to win the Democratic nomination, according to recent polls, with Williams running to her left and Suozzi from her right.
Reporter Door was invited to visit Star Wars: Galactic Starcruiser, Disney World Resorts’ new high-concept hotel experience in Florida, during a press preview event last week. One of the bullet points that really sold fans on the facility in 2019 was a lightsaber training experience, one that promised to evoke the earliest scenes of Luke Skywalker swinging a laser sword way back in 1977. During my visit, I gave it a try and found it to be, technologically speaking, kind of tame. But, like much of the fun to be had aboard the Halcyon, it’s more about the vibe than anything else.
One of the first major demonstrations of the lightsaber training experience showed up on YouTube just three months ago. In it, you can clearly see how the system works. Participants stick a special lightsaber into a beam of light and, if they time it correctly, the lights flash and the saber vibrates. Shields also play a role in the experience, adding more tactile participation while helping to keep fingers and hands from getting whacked on the backswing.
Honestly, it doesn’t look all that much like it does in the movies. It doesn’t even look much like the early concept art. The reason for that, I hope, is fairly self-explanatory.
Laser swords aren’t real, and even if they were, there’s not an insurer in the world that would let guests wield a weapon that could cut through metal. Also, while laser weapons are real, their beams don’t form coherent bolts of light that zip through the air like tracer rounds. Basically, the laws of physics take a lot of the fun out of the lightsaber training experience, making it feel a lot like reverse Laser Tag. But according to Disney’s creative director, Sara Thacher, this was still a major step forward for the tech.
“This is the maximum, epic challenge,” Thacher told Reporter Door. “When we started the project, [we noted] there are many, many amazing VR lightsaber experiences. Those are great, but they are very hard to share with the people that you care about — to be there together, to be experiencing the same thing together.”
As Thacher describes it, the lightsaber training experience that was finally implemented on board the Galactic Starcruiser is a bit of a compromise. It focuses on safety, by having everyone face forward and by not having participants spar against one another. The technology works; I can personally attest to that. And she said that’s in no small part thanks to legendary Disney Imagineer Lanny Smoot, who prototyped the concept nearly a decade ago, before the Galactic Starcruiser was even on the drawing board. But it’s more of a team-building experience than a whizz-bang special effects extravaganza.
Your guide during the lightsaber training is a Saja, an actor portraying one of the descendants of the Guardians of the Whills introduced in Rogue One: A Star Wars Story. They are essentially Force-sensitive refugees who have found a home on board the Halcyon. Their message during the lightsaber training experience is simple, but impactful: It’s our duty to protect each other, and we are stronger together than we are alone. The Saja give the experience its heart — and they help to tie it into the larger storyline of the two-day immersive experience as a whole.
“The actors are so essential,” Thacher said. “From early, early playtests forward they have all been with an actor. We’ve been continually working on that, because what you notice and how you feel doing it is as much about what the technology of the room is telling you, and the game part of the room is telling you, [as] it is about what that person is telling you, and how they guide your focus changes your experience. So that script, and how they interact with you, is so integral. We found we could not playtest them separately.”
Viewed in that way, the lightsaber training is just one part of the whole. The Saja guiding you in that room feels as real as any other passenger on the ship. They’re someone that you can talk to and role-play with all throughout your stay. While those actors are off-stage, the Play Disney Parks app takes over, allowing guests to use the Data Pad to reinforce the lessons learned during training. The app can even help guests to unlock unique narrative experiences, including additional training in the Force and even a visit with Jedi master Yoda himself.
Still, for Star Wars fans burnt out on an uneven prequel trilogy or jaded by the prospect that they might never be able to afford the hotel’s roughly $5,000 price tag, this can feel like another disappointment.
Brittany Matthews’ bachelorette party has been an experience for the Chiefs’ WAGs, and the party isn’t over just yet.
After Matthews and her bride tribe enjoyed goat yoga and a boozy boat day, the group let loose during a night out on the town.
It’s unclear where Matthews — the fiancée of Chiefs quarterback Patrick Mahomes — jetted off to celebrate her bachelorette party, but there has been no shortage of highlights.
Kayla Nicole, the girlfriend of Chiefs tight end Travis Kelce, has been documenting the pre-wedding festivities, and shared a video of herself lounging on a beach.
According to videos posted to Matthews’ Instagram story, the group did a workout, courtesy of “Britt’s Bach Bootcamp.” Matthews is a certified fitness trainer with a bachelor’s degree in kinesiology. She runs her own fitness company that offers online workouts.
The ladies hit the town for a night out with Matthews’ alter ego “Blaire,” as seen in a post on her Instagram. The bride-to-be wore a long, pink wig and a white fringe dress.
The group played “the panty game,” a popular bridal shower game in which guests are asked to bring a unique pair of undies for the future bride.
The festivities also included a movie night, in which they watched “Marry Me,” starring Jennifer Lopez and Owen Wilson.
Matthews kicked off her bachelorette party last Thursday with her best girlfriends, including Tyrann Mathieu’s fiancée Sydni Russell.
Mahomes also kicked off his bachelor party last Thursday in Las Vegas, with teammates Kelce, Jerick McKinnon, Clyde Edwards-Helaire and Orlando Brown.
The high school sweethearts are set to marry sometime in 2022. They celebrated their daughter Sterling’s first birthday earlier this month.
This article was contributed by Ramu Sunkara, CEO and cofounder of Alan AI, and Andrey Ryabov, CTO and cofounder of Alan AI.
You created the next killer app, and you’re a few steps away from making history. As soon as you roll out the first version, users will fall in love with it. They will recommend it to their friends, and network effects will kick in, putting you ahead of your competitors and ensuring your success. All you have to do is figure out how to make the app user-friendly.
It sounds easy, but that last part, the user-friendliness, is easier said than done. And it happens to be one of the most important and most difficult parts of creating products.
As anyone with experience in the software industry can attest, users’ reactions to the first version of your application will likely be very different from your expectations. You’ll witness confusion, frustration, and churn as users struggle to figure out how to use your application and experience its true value.
First impressions are very important. When you launch a new application, you have a very small window of opportunity to learn from your users and adjust. You must identify pain points and continuously adjust the application’s interface to make sure your users receive the optimal experience.
Previously, this endeavor was a painful and slow process that required making expensive changes to the graphical user interface and hoping they worked out. Fortunately, with the advent of a new generation of app-centric, AI-powered voice assistants, the equation is about to change.
Why do good applications fail?
The gap between developer vision and user experience is the reason many applications die. A relevant case study is Hipstamatic, the application that first popularized photo filters in 2009. While Hipstamatic had an excellent idea, it made poor design choices, its user interface introduced a lot of friction, and it lacked features that would have made it appealing to users.
Hipstamatic failed to learn from its flaws and fix them in time. As a result, it gave way to Instagram, a then-lesser-known app that was much more appealing to users and was later acquired by Facebook for $1 billion.
Hipstamatic is one of many examples of good products that die because their teams don’t learn to adapt to their users’ needs and preferences. Today’s applications — especially in the enterprise and workplace domain — have very complicated user interfaces and features. It is very easy to confuse users and hard to find the best layout that will put the right features front and center.
Creating the optimal user interface and experience hinges on two key factors. First, developers and product managers need the right tools to gather relevant data and learn from users’ interactions with their application. And second, they need the tools to quickly iterate and update their user interface.
Wealthy software companies can overcome these challenges by hiring many developers working in parallel on different versions of an application’s user interface. They can roll out and manage complicated A/B/n tests and hire analytics experts to steer their way toward the optimal user interface. They might be able to afford expensive in-person studies and surveys to spot the reasons users leave the conversion funnel.
But for a small startup that is burning investor cash and has limited time and resources, learning can be too expensive — which is why many developers resort to launching their app and praying that it works.
This is about to change with the new generation of voice assistants.
Improving the user journey
First impressions of an app have a profound impact on user retention. If a user quickly finds their way around the interface and gets to experience the app’s true value, they will likely use it again and recommend it to their friends. If they get confused, they will likely become disenchanted and divert their attention to something else. The problem, however, is that users come with different backgrounds, experiences, and expectations. You’ll rarely find a user interface that appeals to all your users.
Now imagine a voice assistant that is deeply integrated in your application and can guide the user through the features. If users are struggling to find something in the app, they can just ask the assistant and it will either take them there or guide them to it. This can be extremely helpful in the onboarding process, where users often become confused and need guidance. As users become familiar with the application, the assistant’s role will gradually change from guidance to optimization, helping them automate tasks and take shortcuts to their favorite features. In applications where users need hands-free experience or quick access to information, the in-app voice assistant will become an invaluable interface.
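As a sketch of what wiring up that guidance might look like, consider the snippet below. The registerIntent and navigateTo helpers are invented for illustration; they are not Alan AI’s SDK or any vendor’s actual API.

```typescript
// Hypothetical in-app voice intents that guide users to features.

type IntentHandler = (utterance: string) => void;

const intents = new Map<RegExp, IntentHandler>();

function registerIntent(pattern: RegExp, handler: IntentHandler): void {
  intents.set(pattern, handler);
}

function navigateTo(screen: string): void {
  console.log(`Navigating to ${screen}`); // in a real app: a router call
}

// "Where do I change my shipping address?" -> take the user there.
registerIntent(/shipping address/i, () => navigateTo("settings/shipping"));

// Dispatch a recognized utterance to the first matching intent.
function onUtterance(text: string): void {
  for (const [pattern, handler] of intents) {
    if (pattern.test(text)) {
      handler(text);
      return;
    }
  }
  console.log("Sorry, I didn't catch that."); // fallback guidance
}

onUtterance("How do I change my shipping address?");
```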
The in-app voice assistant provides unprecedented flexibility to adjust the application to the user’s level of knowledge, experience, and expertise. You can’t create a single user interface that appeals to every user; you would need limitless resources to build numerous versions of your application. A voice assistant, however, can act as a dynamic interface that can be used in various ways, providing each user with a unique experience.
Basically, instead of forcing your users to adapt themselves to a convoluted user interface, an in-app voice assistant gives you a simple user interface that adapts to your users.
For both new and experienced users, the voice assistant can be a huge differentiating factor that can improve conversion and retention rates.
Improving product development and management
The flipside of the user experience is the product development and management process. Here, time is of the essence. Your success largely depends on how fast you can get feedback from your users, learn from their experience, and adjust your application.
Having an in-app voice assistant is the closest thing you can get to being physically present when users are interacting with your app. As you gather voice and app analytics data, you’ll be able to answer pertinent questions such as “On which pages are users getting stuck?” “What features are they struggling to find?” “What are the most asked questions?” “What features do users expect the app to have?” Through this data, you’ll be able to glean important behavior patterns that will steer you in the right direction.
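A minimal sketch of the kind of analysis this data enables, with an invented event schema standing in for real voice analytics:

```typescript
// Aggregate voice queries the assistant could not answer, per screen,
// to surface where users get stuck. Schema is hypothetical.

interface VoiceEvent {
  screen: string;    // where the user was when they asked
  utterance: string;
  answered: boolean;
}

function topStuckScreens(
  events: VoiceEvent[],
  limit = 3,
): Array<[string, number]> {
  const counts = new Map<string, number>();
  for (const e of events) {
    if (!e.answered) {
      counts.set(e.screen, (counts.get(e.screen) ?? 0) + 1);
    }
  }
  return [...counts.entries()].sort((a, b) => b[1] - a[1]).slice(0, limit);
}

const log: VoiceEvent[] = [
  { screen: "checkout", utterance: "where is my coupon?", answered: false },
  { screen: "checkout", utterance: "apply promo code", answered: false },
  { screen: "home", utterance: "open settings", answered: true },
];

console.log(topStuckScreens(log)); // [["checkout", 2]]
```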
Discovering users’ needs is one side of the equation. Responding to them is another and equally challenging part of creating good products. The classic product development paradigm requires you to redesign your application’s user interface, submit it to app stores, wait for it to be vetted and published, and then roll it out to users. For web applications, you’ll have to go through multiple designs, run A/B tests, choose the best new design and then roll it out to all users.
With in-app voice assistants, the interface is already there, so in most cases, you won’t need to make any change to the graphical interface and can roll out new features on the server side with minimal friction.
In-app voice assistants provide a smooth shortcut to the finish line. Instead of feeling your way through the dark, you’ll be casting a bright light on your app and will be able to direct your resources in the right direction with a laser focus. A lot of time and money will be saved. Instead of taking weeks or months to deliver new versions of your app, you’ll be able to iterate several times per week or even per day.
Why now?
Voice assistants have been around for a decade. So why should you be focusing on in-app voice experience now?
There are a couple of reasons. First, the first generation of assistants such as Siri, Alexa and Cortana have helped bring about wide acceptance of voice user interfaces. Today, a wide array of consumer and industrial devices support voice assistants. Millions of families across the world use smart speakers and other voice-enabled devices. Voice accounts for a substantial share of online search queries.
At the same time, first-generation voice assistants have distinct limitations that restrict their use to simple tasks such as invoking apps, reading emails, running online searches, and setting timers. When it comes to specialized, multi-step tasks, classic assistants are of little use and can’t keep track of user context and intent. These assistants live outside applications and are tied to their vendors’ platforms. They are separate from the application’s graphical interface and blind to the user context, which makes it impossible to fully understand user intent or provide visual feedback to users.
The shortcomings of current voice assistants are especially evident in the enterprise sector, where companies are spending millions of dollars to build mobile and web applications for their internal workflows to improve productivity. These applications could benefit greatly from voice assistant support, but only if it’s tightly integrated into the specialized workflows that support these businesses.
To solve these challenges, the next generation of voice assistants will live inside applications and will be deeply integrated with the app’s user interface, workflow, taxonomy, and user context. This shift in architecture will enable developers to use various data sources and contexts to improve the quality and precision of in-app voice recognition and language understanding. Users will see the voice assistant leverage the existing UI to confirm that it has understood and documented their input correctly, and this will help to avoid the friction and frustrations that happen when older voice assistants are applied to complex tasks. This new generation of assistants makes it possible for voice to become an integral part of the app experience.
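One way to picture the payoff of that deep integration is disambiguation biased by on-screen context: when the recognizer returns several candidate transcripts, the words visible on the user’s current screen can break the tie. The sketch below is an assumed approach for illustration, not any vendor’s implementation.

```typescript
// Score each candidate transcript by overlap with the current
// screen's vocabulary, then keep the best-scoring one.

function contextScore(candidate: string, screenVocab: string[]): number {
  const vocab = new Set(screenVocab.map((w) => w.toLowerCase()));
  return candidate
    .toLowerCase()
    .split(/\s+/)
    .filter((w) => vocab.has(w)).length;
}

// Two plausible transcripts; the screen context disambiguates them.
const candidates = ["approve the invoice", "a prove then voice"];
const screenVocab = ["approve", "reject", "invoice", "total"];

const best = candidates.reduce((a, b) =>
  contextScore(a, screenVocab) >= contextScore(b, screenVocab) ? a : b,
);
console.log(best); // "approve the invoice"
```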
The new era of voice user interfaces is just beginning. This is a great opportunity for developers and product managers to make sure their great ideas become successful applications and create significant ROI, especially in the enterprise sector.
Ramu Sunkara is the CEO and cofounder of Alan AI.
Andrey Ryabov is the CTO and cofounder of Alan AI.
OlliOlli World, the delightfully offbeat skateboarding platformer, launched a few weeks ago on basically every gaming console you could ask for. It’s a clean break for the series, taking familiar gameplay but putting it in a totally redesigned world that allows for more exploration, competition and tricks.
You may not recall, but the original OlliOlli was released in 2014 exclusively for the PS Vita before hitting more platforms over the following years. That was my first exposure to the game, and I played it non-stop whenever I traveled; for a few years, the Vita was a constant companion on work trips and vacations alike.
I sunk untold hours into the two OlliOlli games on the Vita, mastering almost everything they threw at me. (I was never able to hack the insane “Rad” mode, where you had to make every single landing perfectly or else you’d slam and have to start the level over.) So while I was thrilled to try OlliOlli World on the PS5, I’ve also been wondering how it works on the Switch — would this be my new on-the-go gaming addiction, or do the compromises of playing on aging hardware degrade the experience?
After a couple weeks, I’m happy to say that OlliOlli World looks and plays great on the Switch. Still, there are a few things you’ll want to know as you decide which platform to buy it on. Of course, the game gives up some visual fidelity on the Switch — as with all games, 1080p when docked to a TV and 720p on the console’s built-in display is as good as it gets, a far cry from the beautifully detailed 4K visuals you’ll get on the PS5 or Xbox Series X. OlliOlli World on the Switch does target 60 fps, similar to other consoles.
None of these changes are surprising; we all know the Switch is less powerful than modern systems. But fortunately, these changes largely don’t make a difference. The character models of your skater, as well as the many people you meet across the skateboarding haven of Radlandia, are indeed less detailed on the Switch. What’s most important is that the game’s gorgeous art style still shines. OlliOlli World is one of the most vibrant games I’ve ever played, and it looks especially vibrant on the OLED Switch’s screen. While it took me a few minutes to adjust to the lower-resolution experience here, I mostly didn’t think about it once I got down to the game’s core skateboarding action.
The difference in frame rate is more noticeable. OlliOlli World is an extremely fast game, one that really benefits from running at 60 fps. But despite the fact that developer Roll7 targeted 60 fps for the Switch, there were times that I felt like it dipped even below 30 fps. Roll7 did a great job of making the Switch version feel smooth enough that gameplay isn’t usually impacted, but sometimes the game would drop frames in a crucial moment that led to me unceremoniously slamming after a trick. The vast majority of the time, things stayed steady enough that it didn’t impact my gameplay. But there’s no doubt that you’ll notice dropped frames compared to how the game plays on the PS5.
I also came across frame rate drops in other parts of the game, like the animation that happens when your skater kicks off a run, or the loading screen transitions that take place when moving from the map into a level. These don’t affect gameplay, but they’re hard to ignore and add to the feeling that the Switch struggles a bit to keep up with the action. But the fact that the frame rate usually stays solid when you’re on a course is far more important.
Probably the most significant compromise when playing on the Switch is the Joy-Con’s relatively tiny analog sticks. Compared to the spacious sticks on PlayStation and Xbox controllers, it’s a bit harder to pull off the game’s more complex tricks when playing on the Switch. Again, though, it’s not a deal-breaker. I’ve thrown down plenty of impressive runs and beat nearly every single challenge the game has thrown at me over the course of dozens of levels.
That said, I’m getting far enough into OlliOlli World on the Switch that levels are getting increasingly difficult, and I’m a little worried about keeping up with the tougher levels that’ll come over the two worlds I have yet to conquer. I’m confident that I’ll be able to make it through basically any level the game throws at me. But each level has a number of specific challenges you can optionally complete — to truly master those, I might end up docking my Switch to the TV and playing with the Switch Pro Controller, which has much better analog sticks than the Joy-Con.
On the other hand, the PS Vita analog sticks are even smaller than those on the Switch, and I eventually mastered two OlliOlli games on that system. There’s no doubt that bigger controllers make pulling off the game’s tricks more comfortable and probably easier, but OlliOlli World is still extremely playable on the Switch.
To sum it up: there are a handful of compromises across graphics and gameplay if you choose to play on the Switch rather than a more powerful console. But I don’t think that they should stop you from playing the game on Nintendo’s handheld. It’s a great pick-up-and-play game, the kind of title you can spend a rewarding 10 minutes with or get lost in for multiple hours. The experience is a little more refined on Sony and Microsoft’s more powerful consoles, but you can’t easily take that on the go with you. If you don’t care about that, snap it up on the PS5 or Xbox Series X / S. But if you’re looking for a game that’s at home both on your TV and away from it, OlliOlli World on the Switch fits the bill perfectly.