Sony is gearing up for its first big camera launch of the year – and its teaser invite for the announcement suggests we'll soon see a powerful new full-frame camera for YouTubers.
The launch will take place on March 29 at 8am PT / 3pm BST, which is 1am AEST for those in Australia, and Sony is promising "the ultimate content creation experience".
So what can we expect? The style of the teaser poster matches ones we've seen for previous ZV-series cameras, like the Sony ZV-1F and Sony ZV-E10. Combine that with the artfully backlit microphone windshield, and we're almost certainly looking at a new model in that video-focused series.
Beyond that, the teaser doesn't give much else away. But the size of the windshield above the camera's silhouette suggests it's going to be either an APS-C or a full-frame camera – and the latest rumors are pointing towards the latter.
The usually reliable Sony Alpha Rumors is speculating that the camera will be called the Sony ZV-E1, and will be based on one of the world's best video cameras, the Sony A7S III. If so, that would mean a full-frame sensor and some powerful video features, like uncropped 4K/60p and 4K/120p video.
That would make the new camera similar to the Sony A7C, a tiny full-frame camera that's great for travel shooters. But, as we noted in our Sony A7C review, that camera also has limited video specs, topping out at 8-bit 4K video at up to 30fps.
In theory, that's the gap Sony's new vlogging camera could fill – and if its price tag is significantly lower than the launch price of the A7S III ($3,499 / £3,800 / AU$5,999), it could well become a popular choice among video creators.
Analysis: Sony's focus returns to video
Like Sony's upcoming new camera, the Sony ZV-E10 (above) is designed for vloggers thanks to its size and autofocus powers (Image credit: Sony)
But that model is still likely on the way, with rumors predicting a launch in the middle of this year. A video-focused camera instead confirms Sony's commitment to dominating the market for YouTubers who need a compact workhorse that raises their production values above their growing competition.
Full-frame cameras are ideal for anyone looking to shoot videos or photos with a shallow depth of field, while maintaining image quality in low light. And Sony cameras are particularly renowned for their autofocus powers, which help solo shooters keep themselves in focus without the need for an extra person behind the camera.
Interestingly, Sony Alpha Rumors says that the rumored ZV-E1 could have "lots of new AI features", which suggests that its autofocus could take another leap forward. So if you're a one-person video team who's looking to upgrade their camera setup, it should be well worth tuning into the launch livestream on March 29.
If the new camera lives up to Sony's hype, and the latest rumors, we could be looking at a new challenger for the top spot in our guide to the best YouTube cameras.
from TechRadar: Photography & video capture news https://ift.tt/xt3O4n5
via IFTTT
This is a sad day... It’s hard to convey the sense of shock and dismay I felt when a message from a colleague appeared this morning, telling me that DPReview was closing. I’m still trying to process it. IR and DPR grew up together as the industry matured from the days when a megapixel was the norm and two was something to shout about. (I launched IR in April of 1998, and Phil Askey launched DPR in December the same year.) As both sites grew, the other was always there, and I myself have used, and continue to use, the DPR site as... (read more)
from Imaging Resource News Page https://ift.tt/VumzTGK
via IFTTT
DPReview.com has announced its closure after 25 years of operation. Digital Photography Review has been owned by Amazon since 2007. While the website has operated mostly independently of its parent company in the subsequent years, DPReview has unfortunately been caught up in the latest Amazon restructuring, which has resulted in nearly 30,000 jobs being eliminated so far this year. DPReview has long been a major player in the photo industry and an incredible resource... (read more)
from Imaging Resource News Page https://ift.tt/coVl0PI
via IFTTT
Adobe has revealed its answer to AI art generators like Midjourney, Dall-E, and Stable Diffusion – and the new family of generative AI tools, collectively called Adobe Firefly, could ultimately be as influential as the original Photoshop was in 1990.
The giant behind apps like Photoshop and Illustrator has been baking AI image generation into its software for years, but Adobe Firefly takes it to a whole new level. Its first Firefly beta brings text-to-image generation to Photoshop and gives you the ability to apply styles to text in Illustrator, among other skills.
A key difference from the likes of Midjourney and Dall-E is that Adobe Firefly is more open about the data its AI models have been trained on. Adobe says this first beta model has been trained on Adobe Stock images, openly-licensed content, and public domain content where the copyright has expired.
In theory, this makes it a more ethical alternative to rivals that have attracted class-action lawsuits from artists who claim that some AI models, including those from Midjourney and Stability AI, are illegally trained on copyrighted artworks. While this is an understandable policy from a giant as big as Adobe, it isn't yet clear what effect it will have on Firefly's overall power and versatility.
Adobe is treading carefully in this space, with a Firefly beta sign-up now open. Signing up won't necessarily grant you access to the new tools, though, as Adobe says that the beta process will be used to "engage with the creative community and customers as it evolves this transformational technology". But the good news for amateurs is that it will be asking "creators of all skill levels" to contribute.
While it might be a while until we see Adobe Firefly's new AI models rolled out across its full range of Creative Cloud apps, the early demos show that some fascinating, powerful tools are coming soon. In general, Firefly takes the usability and creative potential of its apps to new heights, thanks to the ability to simply describe an image, style, or text effect you're looking for.
The first apps that'll benefit from Firefly beta are Adobe Photoshop, Adobe Illustrator, Adobe Express, and Adobe Experience Manager. And Adobe says this Firefly beta is just the first AI model in a family that is in the pipeline, with all of them likely to be integrated into Creative Cloud and Express workflows.
So what exactly is Adobe Firefly right now and how does it compare to the best AI art generators? We've gathered everything you need to know about Adobe's AI milestone in this guide, which you can navigate using the shortcuts on the left.
Adobe Firefly: how to sign up and release date
You can apply to be an Adobe Firefly beta tester right now. It isn't yet clear how many people will be granted beta access, but Adobe will use the process to fine-tune its models before fully integrating them into applications.
Adobe also hasn't yet revealed how long the beta process will be, but says that it'll use the period to "engage with the creative community and customers as it evolves this transformational technology". The speed of its full rollout will likely depend on how successful this beta period proves to be.
Adobe Firefly: which apps is it for?
The first Adobe apps to benefit from Firefly integration will be Adobe Photoshop, Adobe Illustrator, Adobe Express, and Adobe Experience Manager. These will get new tools like text-to-image generation, AI-generated text effects, and more, which you can see in action below.
(Image credit: Adobe)
But AI tools are coming soon to other apps, too. For example, Adobe previewed a feature in Premiere Pro that'll let you change the season and weather of a video scene, simply by writing the request in a text box.
Video editing is about to get a lot more powerful and user-friendly, although it isn't yet clear how quickly Adobe plans to roll out the betas for this next wave of Firefly tools.
Adobe Firefly: how do you use it?
We haven't yet been able to use Adobe Firefly's new tools, but we have seen them in action. And if they work as well as the early demos, they could have a dramatic impact on how Adobe's apps work – and who uses them.
The most obvious parallel to AI art generators like Midjourney, Dall-E, and Stable Diffusion is the text-to-image user interface. Like its rivals, the Firefly beta will let you type a request into a box (for example, "side profile face and ocean double exposure portrait") and it'll produce an AI-generated example.
You'll also be able to apply different styles using a menu that lets you choose between, for example, styling the image as a photo, a graphic, or a piece of art. And there'll be further tweaks possible from a menu with options like 'techniques', 'materials', and 'themes'.
(Image credit: Adobe)
It'll be a similar story with new AI text effects in the likes of Illustrator. For example, you'll be able to type a specific prompt like 'many fireflies in the night, bokeh light' and the AI generator will cook up a font matching that particular description. The possibilities for marketing, social media, and more are huge, particularly for those with no background in digital art.
Looking further ahead, Illustrator will be able to take sketched fonts and turn them into digital reality, while Adobe Express will let you generate social media templates from simple prompts like 'make templates from mood board'.
Adobe Firefly vs Midjourney vs Dall-E
It's a bit too early to draw any conclusions about how well Adobe's new Firefly tools work compared to the likes of Midjourney and Dall-E, but one area where it does differ is in the AI model's training.
Adobe says this first Firefly model is trained on "Adobe Stock images, openly-licensed content, and public domain content where copyright has expired", which means it'll be available for commercial use without the potential threat of copyright issues. Other AI art generators, meanwhile, are embroiled in potential class-action lawsuits from creators who argue that they involve “the illegal use of copyrighted works.”
(Image credit: Adobe)
Interestingly, Adobe also says that it's planning to "enable creators that contribute content for training to benefit from the revenue Firefly generates from images generated from the Adobe Stock dataset". Exactly how Adobe plans to do this hasn't yet been decided, though, so while the intent is laudable, we're interested to hear more details. The company says it "will share details on how contributors are compensated once Firefly is out of beta".
Similarly, Adobe says that one of the broader goals of its Content Authenticity Initiative (an initiative that includes members like Getty and Microsoft) is the creation of a universal 'Do Not Train' content credentials tag, which would allow artists to exclude their creations from an AI image generator's training data. Again, while this is a promising development, it's currently only at the "goal" stage.
Adobe Firefly: how good is it?
Adobe Firefly is clearly a huge moment for its creative apps – and anyone who relies on digital tools like Photoshop, Illustrator, or Express.
While many people already use AI-powered Adobe tools (like Photoshop's Neural filters), Firefly could open them up to a whole new audience – all you need to do is describe anything from images to illustrations and video, and the software 'co-pilot' (as Adobe likes to call its AI) will give you a helping hand.
Of course, none of this is new, and the likes of Midjourney and Stable Diffusion have stolen a march on Adobe in getting AI art generators out into the wild. It also remains to be seen how much Adobe's understandable attempts to make Firefly ethical (by restricting its training data) will impact its overall usefulness and versatility.
The likelihood is that Firefly will simply give existing Adobe customers some very useful new tools to dream up new creations, rather than attracting hordes of converts over from the likes of Midjourney. But we'll give you our first thoughts once we've taken the Firefly beta for a spin, hopefully very soon.
from TechRadar: Photography & video capture news https://ift.tt/xq4BowO
via IFTTT
In a pair of interviews (1, 2) with Nikkei Business, Sony Vice Chairman Shigeki Ishizuka discussed the early days of Sony's camera business, including Sony's A-mount Alpha DSLR cameras, a potential move to Micro Four Thirds, and the eventual decision to make full-frame mirrorless cameras. The interviews are published in Japanese, so as always, there's the potential for something to be lost in translation or misconstrued. That said, we saw some interesting highlights on Sony Alpha Rumors. Reading the full interviews, it's... (read more)
from Imaging Resource News Page https://ift.tt/lW9YdUo
via IFTTT
Move over, James Webb Space Telescope, the venerable Hubble Space Telescope still has much to offer. NASA has announced that Hubble recently spotted an irregular galaxy that showcases an impressive, imposing star-forming region. As seen on Space.com, the new Hubble photo captures NGC 5486, a roughly spiral galaxy located within the constellation Ursa Major. NGC 5486 is about 110 million light-years from Earth. NGC 5486, as seen by Hubble. Image credit: ESA/Hubble &... (read more)
from Imaging Resource News Page https://ift.tt/2ZhSEGs
via IFTTT
The James Webb Space Telescope observed a rare Wolf-Rayet star last June, although NASA announced it this week. The star, WR 124, is showcased in unprecedented detail thanks to Webb's sophisticated imaging instruments. WR 124 is about 15,000 light-years away in the constellation Sagittarius. WR 124 is massive – about 30 times more massive than the Sun. Wolf-Rayet, or WR, stars are a rare, heterogeneous set of stars. Massive stars like WR 124 "race through their lifecycles," and some, like WR 124, go through a brief Wolf-Rayet phase... (read more)
from Imaging Resource News Page https://ift.tt/LMfHiq6
via IFTTT