ILMxLAB is again teaming up with VR arcade company Nomadic to bring a new version of the ‘Lightsaber Dojo’ from Vader Immortal: A Star Wars VR Series to locations worldwide.
With the help of Nomadic, Lucasfilm’s immersive entertainment studio ILMxLAB brought an arcade-style Lightsaber Dojo experience to pop-up installations at select locations late last year. Now the studios are partnering again to produce a version inspired by the Lightsaber Dojo in the series’ third and final episode.
There’s no word on where you’ll be able to play just yet; however, Sean Griffin, President of Nomadic, says Star Wars fans “worldwide” will be able to experience it. The studios’ Lightsaber Dojo from last year, which was based on the mini-game from Vader Immortal: Episode II (2019), was held in pop-up locations across North America.
“The ILMxLAB team is excited to bring a new chapter to the Lightsaber Dojo experience,” said ILMxLAB Senior Producer, Shereif Fattouh. “Fans will soon have the unique opportunity to battle Darth Vader himself using all their abilities in the ultimate test of skill.”
Just like in the Lightsaber Dojo from the third episode, fans will get a chance to wield lightsabers and blasters and use the Force to combat waves of droids, stormtroopers, and other creatures before a climactic fight with Darth Vader. Check out some gameplay footage of the original Lightsaber Dojo III below, courtesy of YouTube channel ‘cycyclist’:
“Nomadic is thrilled to once again partner with ILMxLAB to bring guests into the world of Star Wars,” said Sean Griffin, President of Nomadic. “Our new form factor will deliver a more immersive experience for players, will provide a social viewing experience for their friends and family, and is more efficient for family entertainment centers and arcades to operate. We can’t wait for Star Wars fans worldwide to experience it.”
There’s no word on which VR headset arcade-goers will play on, as the studios only mention “a top-of-the-line virtual reality headset.” However, given ILMxLAB’s previous engagements with Oculus, it’s likely to be an Oculus Quest 2.
The studios say details on availability and pricing of Vader Immortal – Lightsaber Dojo will be available at a later date. We’ll have our eyes peeled in the coming weeks.
September 30, 2020 at 08:47PM
https://ift.tt/3l0mQp9
‘Star Wars Lightsaber Dojo’ VR Arcade Experience Coming to Locations Worldwide - Road to VR
Samsung’s Good Lock suite of customization tools recently received two exciting new modules — Pentastic and Wonderland. In case you missed our coverage of these modules, the new Pentastic module lets you theme the S Pen experience on your Samsung device and the Wonderland module lets you create live wallpapers from any still image of your choice. Now, Samsung is teasing yet another Good Lock module, called Keys Cafe, that will let you personalize the Samsung Keyboard app.
As per a recent press release from the company, the upcoming Keys Cafe module will let you customize the Samsung Keyboard app as per your preference. It will include support for keyboard height and width adjustment, key layout repositioning options, a wide range of themes to customize the look of the keyboard app, and dynamic effects that will bring your keyboard to life.
Additionally, the Keys Cafe module will come with a gamification feature that aims to help you minimize typos by turning typing into a game. The feature will give you a score and ranking based on your performance, which is expected to help you improve your typing accuracy over time. The Keys Cafe module will be available on Samsung devices running One UI 2.1 and above starting next month. In order to use the module, you’ll also need to update the Samsung Keyboard app to version 5.1.05.
Along with the upcoming Keys Cafe module, Samsung has announced an update for the Home Up module. The update will introduce a new Share Manager feature that will let you hide specific share commands from the sharing window, manage your Direct Share contacts, and select the apps you want to see in the sharing window. The new Share Manager feature will soon be available on devices running One UI 2.5 and above.
Furthermore, Samsung is pushing an update for the MultiStar module with a new I ♡ Galaxy Tablet feature. The feature includes a Wireless Keyboard Share option that lets you connect your tablet’s physical keyboard with your Galaxy smartphone for a better typing experience. It also lets you quickly switch between the two devices at the tap of a button. This update has already started rolling out to Galaxy tablet users and it’s supported on tablets running One UI 2.5.
Inside an ordinary-looking home, a robot suspended from the ceiling slowly expands arms holding a sponge, before carefully wiping a kitchen surface clean. Nearby, another robot gently cleans a flat-screen television, causing it to wobble slightly.
The cleaning robots live inside a mock home located at the Toyota Research Institute in Los Altos, California. The institute’s researchers are testing a range of robot technologies designed to help finally realize the dream of a home robot.
After looking at homes in Japan, which were often small and cluttered, the researchers realized they needed a creative solution. “We thought, you know, how can we use the ceiling?” says Max Bajracharya, VP of Robotics at TRI.
In another Toyota lab, in Cambridge, Massachusetts, robots practice picking up cups and plates and loading the dishwasher. In order to operate reliably, and without damaging anything, the robots rely on artificial intelligence algorithms that learn how to grasp objects correctly. The robot also uses a soft gripper equipped with a sense of touch that TRI developed.
Toyota does not have a timeline for commercializing its prototypes, but it is looking to be an early entrant in a potentially big market.
The Japanese carmaker launched the Toyota Research Institute with a $1 billion investment in 2015, betting that its expertise in car manufacturing could be parlayed into more advanced robotics for homes and workplaces.
Gill Pratt, the institute’s CEO, says the sensing, computing, and actuation technology found in modern cars essentially makes them simple robots. “A car is just a robot that operates outside on roads,” he says. “The purpose of a car is to amplify a person, and we think of robots in the same way—as amplifiers of human activity.”
There is hope, especially in Japan, that robots will help care for an aging population in the coming decades. Besides keeping homes clean, robots might tend to people who need physical help, and perhaps even keep them company.
There is also an expectation that recent progress in AI will accelerate advances in robotics. Today, most industrial robots are still fairly dumb, blindly executing repetitive motions. But this is starting to change, with new sensing and planning capabilities, as well as some use of machine learning.
The robotics industry is already worth hundreds of billions of dollars; significant advances could multiply the figures in coming decades. A growing number of startups hope to commercialize AI-powered robots for simple, repetitive tasks in fulfillment centers and retail stores. Big companies such as Google and Amazon are also investing in research that melds AI and robotics.
Having a robot perform useful tasks inside an ordinary home is, however, still a huge challenge because it involves adapting to complex and unpredictable environments. The ability to manipulate any everyday object remains an unsolved problem in robotics despite recent advances.
It isn’t clear how welcome robots will be in homes, as TRI’s researchers admit. But besides testing new hardware, TRI is conducting research on human-robot interaction. And earlier this year, it said it would build a simulated city near Mount Fuji in Japan, to test robotics and other technologies.
As TRI’s projects show, creating robots that could be of help around the home depends on AI as well as computer simulation and the cloud.
To teach robots how to perform a task, TRI researchers create a virtual reality environment where a person demonstrates for the robot by manipulating its arms. After many tries, the robot uses machine learning to determine the best movements. TRI’s robots also practice tasks like loading the dishwasher inside a simulated environment, which gives them many more attempts from which to learn.
“We're starting to make some progress on how we [get the robots to] actually learn the behaviors and then share that behavior across, say, a fleet of robots,” says Bajracharya.
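The demonstrate-then-generalize loop TRI describes can be illustrated with a toy imitation policy. This is a hypothetical sketch, not TRI’s actual system: the states, actions, and nearest-neighbor rule are stand-ins for the learned models and simulated practice the institute uses.

```python
# Toy imitation-learning sketch (hypothetical data): a policy that mimics
# human VR demonstrations by returning the action whose recorded state is
# closest to the current one (1-nearest-neighbor imitation).
from math import dist

# (state, action) pairs logged while a person teleoperates the robot's arms
demonstrations = [
    ((0.0, 0.0), "reach"),
    ((0.5, 0.1), "grasp"),
    ((0.9, 0.4), "lift"),
]

def imitate(state):
    # pick the demonstrated action recorded nearest to the current state
    return min(demonstrations, key=lambda d: dist(d[0], state))[1]

print(imitate((0.45, 0.0)))  # near the "grasp" demonstration
```

In practice TRI replaces the lookup with machine-learned models and augments the real demonstrations with many simulated attempts, but the principle — actions generalized from human-provided examples — is the same.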
More Great WIRED Stories
October 01, 2020 at 03:13AM
https://ift.tt/2ENP9aZ
These Robots Use AI to Learn How to Clean Your House - WIRED
“What in the name of Paypal and/or Palantir did you just say about me, you filthy degenerate? I’ll have you know I’m the Crown Prince of Silicon Valley, and I’ve been involved in numerous successful tech startups, and I have over $1B in liquid funds. I’ve used that money to promote heterodox positions on human enhancement, control political arenas, and am experimenting with mind uploading. I’m also trained in classical philosophy and was recently ranked the most influential libertarian in the world by Google. You are nothing to me but just another alternative future. I will wipe you out with a precision of simulation the likes of which has never been seen before, mark my words.”
That’s not the latest ill-advised Elon Musk tweet, nor is it one of his devoted fans roleplaying on Reddit. And it’s not quite Navy Seal copypasta—an over-the-top, comically written attack paragraph that parodies the voice of a “tough guy”—which spread, copied-and-pasted (that’s the “copypasta” part) around the internet.
Instead, it’s a parody of Navy Seal copypasta—notably, one that was written by a computer. Independent researcher and writer Gwern Branwen fed the language model GPT-3 a few examples of parodies of Navy Seal copypasta (such as minimalist—“I’m navy seal. I have 300 kills. You’re dead, kid”—or pirate—“What in Davy Jones’ locker did ye just bark at me, ye scurvy bilgerat … ”) and then asked it to use those examples to generate new parodies. (Branwen’s many experiments with GPT-3 can be found here.) For this parody, Branwen prompted GPT-3 with the input “Elon Musk and Peter Thiel.”
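The few-shot pattern Branwen used — show the model a handful of example parodies, then leave a new one unfinished for it to complete — can be sketched as plain prompt construction. The example texts below are abridged stand-ins, and the exact prompt format Branwen used is an assumption:

```python
# Minimal sketch of few-shot prompt construction: concatenate labeled example
# parodies, then end with an unfinished line that cues the model to generate
# a new parody in the same pattern. Example texts are abridged/hypothetical.
examples = [
    ("Minimalist", "I'm navy seal. I have 300 kills. You're dead, kid."),
    ("Pirate", "What in Davy Jones' locker did ye just bark at me, ye scurvy bilgerat ..."),
]

def build_prompt(new_topic):
    parts = ["Parodies of Navy Seal copypasta:\n"]
    for style, text in examples:
        parts.append(f"{style} version: {text}\n")
    # the trailing, unfinished line is what the model is asked to complete
    parts.append(f"{new_topic} version:")
    return "\n".join(parts)

print(build_prompt("Elon Musk and Peter Thiel"))
```

The prompt itself carries all the instruction; no fine-tuning is involved, which is what makes this kind of experimentation with GPT-3 so cheap to iterate on.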
GPT-3 is the work of A.I. lab OpenAI, which describes its mission as “discovering and enacting the path to safe artificial general intelligence.” OpenAI has been the source of controversy, especially related to its decision to transition from a nonprofit to a for-profit corporation, which was followed by a $1 billion investment by Microsoft. (Microsoft now has the exclusive license to GPT-3.) OpenAI has been accused of fueling the A.I. hype cycle and was criticized for withholding the release of its previous language model, GPT-2, because it feared releasing the model would be too dangerous. Similarly, the recent release of GPT-3 (in a private beta) has sparked a lot of discussion. Some are heralding it as a leap forward in A.I., citing impressive examples of its abilities to generate code, answer medical queries, and solve language and syntax puzzles. Others are more wary, concerned about the potential for misuse or believing the hype is unfounded. Either way, it’s clear that sophisticated language models are making significant advances in their ability to generate convincing text. And in a world where social media platforms have disrupted the traditional gatekeepers to speech and reach (e.g., newspapers), convincing text-generating A.I. poses challenges to free speech and a free press. Namely, it could enable what sociologist Zeynep Tufekci calls “modern censorship”—information campaigns that harass, confuse, and sow mistrust with the goal of undermining individual agency and political action.
Online harassment is used to intimidate and punish people—often journalists and activists, disproportionately women and minorities—for their speech. Though much of the harassment online is the product of individuals, some is the result of organized campaigns. The Russian government pioneered the organized harassment campaign in the early 2000s, establishing a troll army that targets journalists, activists, and critics who threaten Russian interests.
Sophisticated language models could enable more effective automated harassment.
For example, a sophisticated language model could target harassment to specific speakers, making it more threatening and convincing. There have already been examples of GPT-3 creating mock obituaries that include accurate references to people’s past employers and current family members, which suggests it could be used to generate harassment that’s just as personal. Activists and journalists targeted by harassment often say they can tell the difference between “real” harassment and bot harassment, citing differentiators such as the frequency of posts and the coherence of the content. Models like GPT-3 could make it more difficult to tell the difference, making automated harassment more believable and thus more chilling.
In addition to targeted harassment, those looking to control public debate use a technique called “flooding” to drown out speech they object to and distort the information environment. Flooding involves producing a significant amount of content to distract, confuse, and discredit. Take the creation and dissemination of “fake news” in the United States: People both abroad and at home churn out stories that combine fact and fiction, undermining mainstream news organizations while distracting and confusing the public. By automating much of the writing process, sophisticated language models such as GPT-3 could significantly increase the effectiveness of flooding operations.
OpenAI’s paper about GPT-3 (currently a preprint) provides important evidence that supports this. The authors ran an experiment testing whether people could tell the difference between real news articles written by a human and articles written by GPT-3. They found that testers were barely able to distinguish between real and GPT-3-generated articles, averaging an accuracy of 52 percent—only slightly better than flipping a coin. This result has been borne out in the real world as well. It was recently revealed that a GPT-3-generated blog reached the No. 1 spot on the tech-focused news aggregator Hacker News. A student at the University of California, Berkeley set up the blog as an experiment to see whether people could tell that it was written by GPT-3. Tens of thousands of Hacker News readers didn’t suspect a thing, and the few who did were downvoted.
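The “52 percent — only slightly better than flipping a coin” claim can be sanity-checked with an exact binomial test. The sample size below (n = 100) is purely an assumption for illustration, not the paper’s actual number of judgments:

```python
# Exact two-sided binomial test: how surprising is 52% accuracy if raters
# were really just guessing (p = 0.5)? n = 100 is a hypothetical sample size.
from math import comb

def binom_two_sided_p(n, k, p=0.5):
    # sum the probabilities of all outcomes no more likely than the observed one
    probs = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    return sum(q for q in probs if q <= probs[k] + 1e-12)

# 52 correct out of 100: the p-value is large, i.e. statistically
# indistinguishable from coin-flipping at this sample size
print(binom_two_sided_p(100, 52))
```

By contrast, an accuracy like 80/100 would yield a vanishingly small p-value — which is exactly why 52 percent reads as “the testers could not really tell.”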
With A.I.-generated writing able to fool many readers, disinformation-as-a-service will become possible, eliminating the need for human-staffed “troll farms” and enabling organizations large and small to shape public debate with the low costs, high efficiency, and scalability of software. This has the potential to make flooding frictionless and pervasive.
Some people are skeptical of GPT-3’s eventual impact, commenting that it writes like a first-year student. This may be true, but have you read any misinformation? A first-year student could easily produce higher-quality misinformation than the status quo. GPT-3 doesn’t need to be writing a weekly column for the Atlantic to be effective. It just has to be able to not raise alarms among readers of less credentialed online content such as tweets, blogs, Facebook posts, and “fake news.” This type of content is a significant amount of what is created and shared online, and it is clear that it could be automated convincingly by GPT-3 and models like it.
Mitigating the harmful effects of sophisticated language models will require addressing information campaigns more generally. This means approaches that span the technical (fake-text and bot detection, new social media), social (model release norms and digital literacy), and political (antitrust and regulatory changes). GPT-3 doesn’t change the problem; it just further entrenches it. As we have seen, our institutions have largely failed in the face of the challenges posed by the internet. GPT-3 and language models like it will only make safeguarding healthy public discourse online more difficult—and more important.
As widely predicted, Google announced two smartphones during its Launch Night In event today: The Pixel 5 and Pixel 4a (5G). The Pixel 5 is the follow-up to last year’s Pixel 4, while the Pixel 4a (5G) is a 5G-compatible version of the Pixel 4a that launched in August.
Neither phone appears to introduce many AI-powered features that aren’t already available on existing Pixel devices. (Pixel hardware has historically been a showcase for Google’s AI innovations.) Instead, they seem aimed at nudging the lineup toward the midrange. Affordability is the focus rather than cutting-edge technology, along with the recognition that neither phone is likely to make a splash in a highly saturated market. Reportedly, Google plans to produce fewer than 1 million Pixel 5 smartphones this year; production could be as low as around 800,000 units.
The Pixel 5 might be a successor in name, but it’s arguably a downgrade from the Pixel 4 in that it swaps the Qualcomm Snapdragon 855 processor for the less-powerful Snapdragon 765G. The RAM capacity has been bumped from 6GB to 8GB, which could make tasks like app-switching faster. The Pixel 5 also has a 4,080mAh battery — the largest in any Pixel to date. Google claims it lasts up to 48 hours on a charge with Extreme Battery Saver, a mode that lets users choose which apps remain awake.
Speaking of the battery, the Pixel 5 introduces Battery Share, a reverse charging feature that can be used to wirelessly recharge Google’s Pixel Buds and other Qi-compatible devices. It’s akin to the Qi reverse wireless charging features found in Samsung’s Galaxy S10 and S20 series.
Above: The Pixel 5.
Image Credit: Google
The Pixel 5 retains the 90Hz-refresh-rate, 6-inch, 2,340×1,080 OLED display (19.5:9 aspect ratio) introduced with the Pixel 4, as well as the Pixel 4’s rear-facing 12.2-megapixel and 16-megapixel cameras. (The 16-megapixel camera might have an ultra-wide lens, rather than the Pixel 4’s telephoto lens.) As for the front-facing camera, it’s a single 8-megapixel wide-angle affair. There’s a fingerprint sensor on the rear of Pixel 5, harking back to the Pixel 3, and Google has ditched the Pixel 4’s gesture-sensing Soli radar in favor of a streamlined design.
Other Pixel 5 highlights include IP68-rated water- and dust-resistant casing, sub-6GHz 5G compatibility, and 18W USB-C charging and wireless charging. There’s also Hold for Me, a Google Assistant-powered feature that waits on hold for you and lets you know when someone’s on the line. (Currently, Hold for Me is only available in the U.S. in English for toll-free numbers, Google says.) Google’s night shooting mode, Night Sight, now works in portrait mode; Portrait Light illuminates portraits even when they’re backlit; and Cinematic Pan creates a “sweeping” video effect by stabilizing and slowing down motion.
The Pixel 4a (5G) is a tad less exciting, but it sports a larger display than the Pixel 4 (6.2 inches versus 5.8 inches). It also shares the Pixel 5’s 2,340×1,080 resolution, processor, and cameras alongside a headphone jack, but at the expense of other components. The Pixel 4a (5G) makes do with a 60Hz screen refresh rate, 6GB of RAM, a 3,885mAh battery, and Gorilla Glass 3 instead of the Pixel 5’s Gorilla Glass 6, with no IP rating for water or dust resistance.
The Pixel 4a (5G) will cost $499, according to Google — a $150 premium over the $349 Pixel 4a. It’s available in the U.S., Canada, U.K., Ireland, France, Germany, Japan, Taiwan, and Australia. The Pixel 5 costs $699 in the U.S., U.K., Canada, Ireland, France, Germany, Japan, Taiwan, and Australia, which makes it $100 cheaper than the $799-and-up Pixel 4.
October 01, 2020 at 01:16AM
https://ift.tt/3jjUDcy
Pixel 5 fails to live up to Google’s AI showcase device - VentureBeat
Google's original Chromecast helped popularize streaming to TVs when it first came out in 2013. The $35 device allowed you to "cast" video from apps on your phone, like YouTube, Hulu and Netflix, to watch on a big TV. That's a common feature in TVs now, but subsequent iterations of the dongle have gained new capabilities like 4K streaming. On Wednesday Google gave the Chromecast its biggest upgrade yet. It's called Chromecast with Google TV, and I've had a few hours to play with it.
Available now for $50 (£60, AU$99), the new Chromecast is the first to include a physical remote, complete with Google Assistant for voice control. There's also an actual on-screen menu system, powered by Google TV, the company's new name for its Android TV operating system. Google TV is similar to its predecessor, with access to thousands of apps including Netflix, Disney Plus, Amazon Prime Video, HBO Max and Spotify (but not Apple TV Plus).
Chromecast with Google TV has all of the big features found on competing streamers. In addition to 4K HDR video, it also supports Dolby Vision and Dolby Atmos on compatible TVs and sound systems. The remote can control volume and power on a TV or soundbar, allowing me to (mostly) ditch my TV remote. I can press and hold on a button and speak into the mic for Google Assistant, which allows the digital helper to control your smart home, search for content, open apps, display the weather and perform other tasks.
Hands-on: Quick setup but some initial quirks
In my brief time with the new Chromecast, so far it has performed admirably. Setup of both the new Chromecast and the remote functions on an LG OLED TV was a breeze. The Google Home app on an iPhone did most of the leg work -- and saved me from having to hunt and peck to type in login information using an on-screen keyboard.
Video quality was as good as I expected: Dolby Vision worked well on the OLED and other shows, including the Yankees-Indians game on YouTube TV, played clearly. Volume control on my Sonos Beam soundbar worked as expected, but it doesn't support Atmos so I wasn't able to test that feature yet.
A few other quirks and observations:
Asking Google Assistant to play Avengers: Endgame brought up Disney Plus, but asking to see Avengers: Infinity War took me straight to TBS (which has the cable rights) on YouTube TV. I didn't even have the option to choose Disney Plus, which allows for streaming in higher quality 4K with Dolby Vision.
Asking Google Assistant questions while watching YouTube TV not only pauses the action but the assistant's answer also covers a large portion of the top left of the display.
The input button on the bottom of the Chromecast's remote is great for switching your TV to Chromecast, but it doesn't work for switching the TV to other inputs (like a game console).
Opening up the Yankees-Indians game on YouTube TV took over 10 seconds from the "on now" tab until I was actually able to see Giancarlo Stanton face Shane Bieber on my TV.
The streamer comes in three colors -- snow white, sunrise pink and sky blue, the version I used -- and like prior models it's designed to tuck behind your television, out of sight. Power is also now delivered through USB-C as opposed to Micro-USB.
I wasn't able to test Google Stadia, the company's streaming video game platform, because it's not available on the Chromecast with Google TV at launch. Google says it will be adding support for Stadia "in the first half of 2021," and for now is recommending that those who want to play Stadia on a TV pick up the Chromecast Ultra, which remains on sale as part of Google's $100 Premiere Edition, bundled with a controller.
The regular Chromecast, which lacks a remote and the Google TV software and can only stream up to 1080p HD, remains available for $30 (£25, AU$59).
A change in vision
With the new Chromecast, Google seems to have reversed course on its original vision of the phone as the remote. You can still cast apps from your phone, tablet or computer with the new version, but the option to use a dedicated clicker makes the Chromecast with Google TV a potentially more appealing product. The upgrade comes at the perfect time: during a pandemic when everybody is streaming more than ever.
Priced at $50 with the remote, the new Chromecast with Google TV is poised to take on two of CNET's favorite 4K streaming devices, the Roku Streaming Stick Plus and the Amazon Fire TV Stick 4K. Roku and Amazon currently dominate the smart TV and streaming device landscape, with each platform having over 40 million users.
At first blush, the new Chromecast has everything it needs to usher some of those users into Google's camp -- including an advantage in app support. Google TV also offers one major new app, HBO Max, that's currently missing from rivals Roku and Fire TV. There's also an app for Peacock, another new streaming service that Fire TV lacks. On the other hand, both Roku and Fire TV have the Apple TV app and access to Apple TV Plus, while Google TV does not.
I'll be testing out the new Chromecast against the competition over the next couple of days. Look for a full review on CNET soon.
October 01, 2020 at 02:12AM
https://ift.tt/2SaQr2K
Chromecast with Google TV is the streaming device's biggest upgrade yet: Hands-on - CNET
The upcoming "iPhone 12 Pro Max" is anticipated to have a number of unique high-end features not found on any other iPhone, such as its screen size, LiDAR scanner, faster 5G, and potentially a higher display refresh rate.
Leaks have suggested that the iPhone 12 Pro Max's 6.7-inch display could support a 120Hz refresh rate, but doubt has been cast on the feature arriving in the final product. Supposed PVT (production validation test) models indicated that the iPhone 12 Pro Max will be able to "alter the refresh rate from 120Hz to 60Hz in accordance to the content displayed on-screen," and have a resolution of 2788 x 1284 at 458 pixels per inch. This model is also speculated to have Y-OCTA support, 10-bit color, and be XDR capable.
Other rumors have said that the device will not have a 120Hz display after a series of production setbacks. Instead, the feature could arrive in 2021 alongside the adoption of low-power LTPO displays and a variable refresh rate that preserves battery life. Adding a ProMotion display to the iPhone without the LTPO technology may result in an undesirable battery drain, although it is worth noting that the iPad Pro uses ProMotion without LTPO.
Supply-chain analyst Ming-Chi Kuo has said that iPhone 12 models will not support a 120Hz refresh rate due to battery life considerations, and that the feature could instead arrive next year with LTPO.
Moreover, the iPhone 12 Pro Max is widely believed to have a LiDAR scanner, which has otherwise only appeared on the 2020 iPad Pro, for enhanced augmented reality and photography experiences. The same alleged PVT models used the LiDAR scanner for "assisted autofocus and subject detection for Video and Night Mode." Some rumors have proposed that LiDAR will be exclusive to the iPhone 12 Pro Max since it is a high-end feature.
Rumors intimate that the iPhone 12 Pro Max will also be the only model to have faster mmWave 5G. Sub-6GHz 5G, which is the slower but more widespread form of 5G, is expected to be present on other new iPhone models. Only the largest iPhone in the lineup supposedly has the internal space to accommodate the antenna design needed to implement mmWave at this time, along with the larger battery needed for its higher power usage.
For more on what to expect from the 2020 iPhone models, be sure to see our dedicated roundup.
September 30, 2020 at 04:01PM
https://ift.tt/3cHEfQC
iPhone 12 'Pro Max' Model to Sport Unique High-End Features - Mac Rumors
During a virtual event today, Google unveiled two new phones, the premium Pixel 5 and the more affordable Pixel 4A 5G. The Pixel 5 serves as the successor to last year's Pixel 4 and brings new features including 5G connectivity, an ultra-wide camera and a much larger battery. The phone costs $699 (£599, AU$999) and will be available in the US from retailers on Oct. 29 with preorders starting today.
As Google's marquee handset, the Pixel 5 represents the tech giant's latest offerings in Android 11 software, like a new Hold For Me calling feature, and top-of-the-line hardware. But the device faces stiff competition from several phones, including the popular iPhone 11 and the newly announced Galaxy S20 FE. Devices from smaller companies, like the OnePlus 8, aim to cut the Pixel 5's share of potential Android users as well.
In a little over 12 hours from now, Google will announce the Pixel 4a 5G and Pixel 5 at its Launch Night In event. Thanks to numerous leaks, we pretty much know everything there is to know about the two phones, including the fact that they’ll both launch with the latest version of Android, Android 11, out of the box. An update to Android 11 was released for existing Pixel devices on September 8th, and we’ve seen numerous OEMs release beta updates for their own devices. But so far, no one has launched a new smartphone with the latest version out of the box. We all thought that Google would be the first one to do so, but Chinese smartphone brand Vivo has beaten them to the punch with the launch of the new Vivo V20.
In a press release, Vivo says its new V20 series “will run on the recently-launched Android 11 operating system,” which sure sounds like the 3 devices will run on the latest OS version. However, the press release also mentions that the company “has a plan to equip V20 series with Android 11 in succession” depending “on market conditions.” That’s rather vague, and left us to wonder if the Vivo V20 SE, V20, and V20 Pro really will run Android 11. It wouldn’t be the first time we’ve seen companies mention the latest OS version as a marketing tactic—HMD Global says its new Nokia 2.4 and Nokia 3.4 are “Android 11-ready,” which just means they’ll receive the actual Android 11 update whenever the company is ready to roll it out. Interestingly, the product pages for the Vivo V20 SE and V20 Pro state the two devices are running Funtouch OS 11 based on Android 10, while the product page for the Vivo V20 says it’s running Funtouch OS 11 based on Android 11.
Thankfully, it’s easy for us to confirm that the regular Vivo V20 is, in fact, running the latest Android version. The phone has already reached the hands of Indonesian YouTubers like GadgetIn, who published an unboxing of the new phone. In the video, he goes through the settings menu and shows that the device is running Android Version 11 and has the new Easter egg.
Vivo V20 running Funtouch OS 11 based on Android 11. Source: GadgetIn.
His video also confirms that the device’s model name is “V2025.” I cross-referenced that with the Google Play Console’s Device Catalog and confirmed that the device’s SDK level is indeed listed as “30” for Android 11. In fact, the Vivo V20 is the only device on the catalog to have its SDK version listed as “30” and nothing else, confirming it’s the first phone to launch with the latest Android version. All the other phones on the catalog that support SDK level 30 either upgraded to Android 11 (like the current crop of Pixel phones and the ADT-3) or are currently in beta (like the OnePlus, OPPO, Realme, and Xiaomi phones that are listed).
As for why this matters, well, there are two reasons. First, the fact that an OEM managed to beat Google to launching a phone with Android 11 is impressive considering how slow most OEMs usually are at releasing major OS updates. Second, the “shipping API level” of a phone is actually quite important to note, because it factors into how many OS updates the manufacturer is expected to deliver. When most OEMs promise “2 years” of OS upgrades, they really mean “2 letter upgrades” or “2 generations of updates.” Thus, a phone launching with Android 10 can expect to get an update to Android 11 and Android 12, while a phone launching with Android 11 can expect to get an update to Android 12 and Android 13. We’re not sure how long Vivo plans on supporting the new Vivo V20, but it’s definitely promising that the phone has launched with the latest OS version.
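The Device Catalog check described above amounts to filtering for devices whose only listed SDK level is 30 (Android 11), i.e. phones that shipped with it rather than being upgraded to it or running a beta. A hypothetical sketch — the catalog rows here are illustrative stand-ins, not real Play Console data:

```python
# Hypothetical sketch of the Play Console Device Catalog check: a device whose
# only supported SDK level is 30 must have shipped with Android 11, since there
# is no earlier level it could have upgraded from. Rows are illustrative.
catalog = {
    "V2025 (Vivo V20)": [30],
    "Pixel 4a": [29, 30],      # upgraded to Android 11 from Android 10
    "OnePlus 8": [29, 30],     # running an Android 11 beta
}

shipped_with_android_11 = [
    name for name, sdk_levels in catalog.items() if sdk_levels == [30]
]
print(shipped_with_android_11)
```

This is why the article can single out the V2025: it is the only entry listing SDK level 30 and nothing else.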
September 30, 2020 at 11:22AM
https://ift.tt/3n5G2n5
Vivo beats Google to launching the first phone with Android 11 - XDA Developers
Apple’s mysterious launch strategy for the iPhone 12 range has attracted a lot of attention, but the company has now accidentally revealed a new release detail which will receive a mixed reaction.
As spotted by MacRumors, Apple has included wording in its new iOS 14.2 beta that all but confirms it will not bundle EarPods with its upcoming iPhones. Given the popularity of AirPods, some will applaud Apple for the environmental benefits of not including earphones which millions of users would leave in the box. On the flipside, Apple is also tipped to remove the wall charger while increasing prices.
Breaking down its discovery, MacRumors points out that “In iOS 14 and earlier versions of iOS, there's a mention of reducing exposure to RF energy by using the ‘supplied headphones’, which is the same wording that Apple has used for years now.” But in iOS 14.2, the word “supplied” has been removed.
While subtle, this deliberate change is impossible to ignore after months of leaks. Furthermore, while the environmental benefits are indisputable, removing both EarPods and a wall charger in a year when Apple will also announce a further expansion of the AirPods range and a new magnetic wireless charging accessory may be deemed opportunistic.
Of course, the real villain here is 5G, with modem prices still so high that only the iPhone 12 Pro Max is expected to receive the top specification. With 5G coverage also limited in most parts of the world, and iterative design and performance upgrades coming to the iPhone 12 range, it may not be the most compelling year to upgrade.
That said, it’s almost impossible to escape the excitement that builds around a new iPhone launch and that date is fast approaching.