On Thursday, OpenAI launched a new ChatGPT interface. The "Canvas" product opens in a separate window alongside the normal chat box and lets
users generate writing and coding projects with the help of artificial intelligence directly in the canvas. From there, users can highlight sections of the work for the model to edit.
The new Canvas interface is launching first as a beta for ChatGPT Plus and Teams users, with Enterprise and Edu users gaining access next week, and the company plans to make the feature available to free users once it is out of beta.
Canvas is aimed at people who use ChatGPT every day to generate writing and code. The standard chat interface is easy to use and works well for many tasks, but it falls short on projects that require repeated editing and revision. With the new addition, ChatGPT gains a better understanding of what users are trying to accomplish, and users can highlight specific parts of the work to indicate what ChatGPT should focus on.
Importantly, users stay in control of the project in Canvas and can edit text or code directly. There is also a menu of shortcuts for asking ChatGPT to handle common tasks, such as adjusting the length of a piece of writing or debugging code, to take advantage of the tool's enhanced potential for developers.
For now, AI-powered chatbots can rarely finish a larger project from a single prompt, but they usually produce a solid starting point. Editable workspaces such as Canvas let users fix the flawed parts of a chatbot's output without having to rework the prompt and regenerate an entirely new stretch of text or code.
According to TechCrunch, OpenAI product manager Daniel Levine said in a demo, "This is just a more natural interface for collaborating with ChatGPT."
In the demo, Levine selected "GPT-4o with canvas" from the ChatGPT model picker. OpenAI says Canvas windows will also pop up automatically when ChatGPT detects that a separate workspace could be helpful, and users can add "use canvas" to a prompt to open the window themselves.
Google Lens Expands Its App Capabilities To Answer Questions From Videos (Image Credits: Google Website)
Google announced that it is upgrading visual search in the Google Lens app, which will be able to answer questions about users' surroundings almost instantly. iOS and Android users who speak English can capture a video with Lens and then ask questions about the objects that appear in it.
What is Google Lens? Google Lens is a set of vision-based computing capabilities that understand what users are looking at and provide relevant information, such as identifying plants and animals, exploring local spots or menus, and finding visually similar images.
According to Lou Wang, Google's director of product management, the new Lens feature uses a Gemini model tailored to understand an entire video and return pertinent information. Machine learning is central to the feature: Gemini, part of Google's AI family, already powers many products in the company's portfolio.
"Let's say you want to learn more about some interesting fish," Wang said in a press briefing. "[Lens will] produce an overview that explains why they're swimming in a circle, along with more resources and helpful information."
To access the new Lens feature, users must sign up for the Google Search Labs program and opt in to the "AI Overviews and more" experimental features in Labs. After that, in the Google app, holding the phone's shutter button activates video-capture mode.
Ask a question while recording, and Lens almost instantly returns an answer powered by AI Overviews, which uses machine learning to collect and summarize information from the web. Wang also said that Lens uses artificial intelligence to identify the frames in a video that are most interesting, important, and relevant to the original question.
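Wang didn't go into detail on how that frame selection works, but the general idea can be illustrated with a small, hypothetical sketch: sample frames from the clip, score each one against the question with some relevance model, and keep only the highest-scoring few for the answering step. The function names and the toy scorer below are assumptions made for illustration, not Google's actual pipeline.

```python
# Hypothetical sketch of question-guided frame selection (not Google's implementation):
# score every sampled frame against the user's question and keep the top few.
from typing import Callable, List, Tuple


def select_relevant_frames(
    frames: List[bytes],
    question: str,
    score_frame_relevance: Callable[[bytes, str], float],  # assumed stand-in for a multimodal scorer
    top_k: int = 5,
) -> List[Tuple[int, float]]:
    """Return (frame_index, score) pairs for the top_k frames most relevant to the question."""
    scored = [(i, score_frame_relevance(frame, question)) for i, frame in enumerate(frames)]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]


if __name__ == "__main__":
    # Toy stand-in scorer: pretend frames later in the clip are more relevant.
    fake_frames = [b"frame-%d" % i for i in range(30)]
    toy_scorer = lambda frame, question: float(fake_frames.index(frame))
    print(select_relevant_frames(fake_frames, "why are the fish swimming in a circle?", toy_scorer))
```

In a real system the scorer would be a multimodal model rather than the toy function used here; the selected frames would then be passed to the Gemini model alongside the question.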
Beyond video analysis, Google Lens can now search with text and images at the same time. English-speaking users, including those not enrolled in Labs, can take a photo in the Google app by holding the shutter button and then ask questions about it. And as of yesterday, when Lens on Android or iOS recognizes a product, it can provide information about it, including price, brand, availability, and reviews.
"Let's say you saw a backpack, and you like it," Wang said.
“You can use Lens to identify that product and you’ll be able to instantly see details you might be wondering about.”
There is also an advertising element to the feature: the results page will show shopping ads relevant to the search. That ad component matters because about 4 billion Google Lens searches are related to shopping.
The New Oura Ring 4 - Sleeker Design, Enhanced Features And Greater Capabilities (Image Credits: Oura Website)
There has recently been strong interest in smart health devices that are accurate but screenless, since they let users keep wearing a regular watch. As a result, the smart ring market has attracted many companies competing for a share of this growing segment.
Despite the competition, Oura has remained one of the most accurate and useful options, and unlike many other health trackers, the Oura Ring stands out by offering features that aren't tied to constant smartphone use. This year, Oura is giving its popular smart ring a makeover with the Oura Ring 4, starting at $349. The new ring promises a slimmer design, improved accuracy, and more sizes to fit a wider range of fingers.
Alongside today's launch of the Oura Ring 4, the company says its app is also getting a complete redesign with new features and better organization.
At first glance, the new ring doesn't look much different from the previous model, the Oura Ring 3, but it is now made entirely of titanium, replacing the previous epoxy interior. Every version of the ring will be fully round, like the current Horizon models, because the sensors no longer have a prominent raised cover and instead sit in a flatter, more comfortable profile.
Design aside, the biggest update in the Oura Ring 4 is in the software, where a new algorithm tackles a problem nearly every smart ring has: every finger is different, and rings tend to shift during the day. If a ring isn't sitting in an optimal position, it can leave gaps in the data.
To address this, Oura says the new algorithm increases the number of signal pathways from eight to 18, so the ring can collect the best signal from whatever position it's in.
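Oura hasn't published the algorithm itself, but the basic idea of keeping whichever pathway has the cleanest signal at any given moment can be sketched roughly as follows; the data structure, quality score, and threshold below are illustrative assumptions, not Oura's implementation.

```python
# Minimal sketch (assumptions, not Oura's algorithm): sample all sensing pathways
# and keep the reading from the pathway with the highest signal quality.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class PathwayReading:
    pathway_id: int
    heart_rate_bpm: float
    signal_quality: float  # higher is better, e.g. an SNR-like score (made-up metric)


def best_reading(readings: List[PathwayReading],
                 min_quality: float = 0.5) -> Optional[PathwayReading]:
    """Pick the highest-quality pathway; return None if every pathway is too noisy (a data gap)."""
    usable = [r for r in readings if r.signal_quality >= min_quality]
    return max(usable, key=lambda r: r.signal_quality, default=None)


if __name__ == "__main__":
    # Four of the (up to 18) pathways sampled at one moment, with made-up quality scores.
    sample = [PathwayReading(i, 62.0 + 0.3 * i, q) for i, q in enumerate([0.2, 0.9, 0.6, 0.4])]
    print(best_reading(sample))  # expect the pathway with quality 0.9
```

In this sketch, returning None when every pathway is too noisy corresponds to the data gaps the extra pathways are meant to reduce: with more pathways around the finger, it's more likely at least one has a usable signal.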
According to an external study, Oura's new algorithm delivers a 120% increase in signal quality and a 30% increase in accuracy for the blood oxygen tracking feature. Oura also claims the ring will be a better sleep tracker, with the improvements reducing gaps in daytime and nighttime heart rate data by 7% and 31%, respectively. Battery life improves as well, reaching up to eight days depending on ring size, since the larger sizes are supposed to last longer.
The Oura Ring 4 starts shipping October 15, and the good news is that the monthly subscription price isn't going up.
Ford Expedition gains Android Automotive and half of Lincoln’s Panoramic Display (Image Credits: Ford Website)
Ford is launching a redesigned version of its largest SUV, which now includes the company's brand-new Digital Experience infotainment system. The 2025 model becomes the second Ford to get the Android Automotive-based system, after the Explorer.
The dashboard and user interface resemble a cut-down version of Lincoln's panoramic screen. The new Ford Expedition was unveiled today in Texas, accompanied by a drone light show, and it appears to use a 23-inch panel rather than the Lincoln's full panoramic display. Drivers still get a similar instrument cluster with navigation directly in front of their seat, while the smaller center touchscreen runs Google Play Store apps, works with Apple CarPlay, and handles HVAC controls. The new Expedition also offers Google Assistant.
Panoramic screens have recently become popular in the automotive industry, with manufacturers fitting them to their most luxurious cars. They are usually pitched as a "safer" way for drivers to watch streaming video and, in some cases, even play PlayStation games.
Ford is offering the Expedition in a standard version and an extended "Max" version. For the center row, buyers can choose between bench seating and captain's chairs, and the first-row headrests have clips to hold smartphones and tablets for center-row passengers. With the bench, the 2025 Expedition can seat up to eight people.
At first glance, the new Expedition looks a lot like the new Lincoln Navigator, with a split rear gate, a light bar up front, and a similar 3.5-liter V6 engine. A new Tremor trim for the 2025 Expedition boosts output to 440 horsepower and adds a tuned suspension and special trail modes.