Category: 4. Technology

  • Tony Hawk On Tony Hawk’s Pro Skater 3 + 4 Release: ‘The Nostalgia Is Obvious’

    Tony Hawk On Tony Hawk’s Pro Skater 3 + 4 Release: ‘The Nostalgia Is Obvious’

    For everyone who is looking to relive their childhood, Tony Hawk’s Pro Skater 3 + 4 will be re-released.

The legendary games will be released on major consoles on July 11, marking the second time Hawk’s games have been re-released on the current generation of consoles; Tony Hawk’s Pro Skater 1 + 2 received the same treatment in 2020 and 2021.

    “The nostalgia is obvious, and with our game coming out, the nostalgia for that is very strong,” says Hawk in a one-on-one interview.

Tony Hawk’s Pro Skater is one of the most iconic video game franchises of all time. The first game was released in 1999, at a time when extreme sports were just starting to become mainstream. The game’s release came just months after Hawk landed the first-ever 900 at X Games V.

The original series generated over $1.4 billion in sales, with Tony Hawk’s Pro Skater 2 the franchise’s best seller at over two million copies in the United States alone.

The new edition will feature all of the notable skaters from the original games — Bucky Lasek, Bob Burnquist, Chad Muska and Hawk himself — along with multiple new current skaters. The game will also feature skateboard legend Bam Margera, something Hawk had to push hard for. Margera will be included as a secret skater in the re-release of the game.

While the game is very true to the original versions — Tony Hawk’s Pro Skater 3 and 4 were released in 2001 and 2002 — Hawk mentions that the re-release will feature newly added elements, including new maps and levels.

    “Based on the success of our last remaster (1+2), I think it’ll be great,” says Hawk of the game’s re-release. “This one is more of a remake. We have some of the iconic levels from three and four, I think the ones that people most identify with. We also added some new maps and new levels, which I wanted to do.”

There will be several new skaters added to the original lineup, including Chloe Covell — the youngest women’s street gold medalist in X Games history — Rayssa Leal, Yuto Horigome, Jamie Foy and Zion Wright. There are a number of new international skaters in the game, which reflects the growing popularity of extreme sports around the world.

Foy is considered the “best street skater” these days, according to Hawk. Meanwhile, Hawk calls the 26-year-old Wright a “machine.” The game will also feature new songs, with Hawk saying he didn’t want to repeat the old soundtrack. He also cites wanting to bring in songs that resonate with the current generation.

“I’m excited to venture into new territory and give people a chance to skate and do combos in new areas,” says Hawk. “We’ve updated the skaters so they reflect the current roster of the people you see either competing or in Thrasher Magazine, while still honoring the original characters.”

    Skateboarding debuted at the Olympics in 2020 and Hawk points towards that as a major reason for the international growth of the sport.

    “With the Olympics inclusion, that has helped to open eyes to skating in other countries that maybe hadn’t embraced it or considered it before,” says Hawk. “I feel like the international growth is even bigger. If you look at the top competitors now, so many are from Japan, from Australia, from Brazil, and that element is growing.”

A total of 80 skateboarders from 25 different countries participated in the 2020 Summer Olympics. Japan won the most medals (five, including three gold), with Brazil pulling in three. Tony Hawk’s Pro Skater 3 + 4 features Horigome (who won gold in the men’s street competition) and Brazilian skater Rayssa Leal, who won silver in the women’s street competition.

    “Obviously, the United States is still doing well, but I feel like it has become much more international,” says Hawk. “I’m thankful for that, because it gives kids a chance to try it wherever they are.”

    Tony Hawk Partnering With Tony The Tiger For Tony Hawk’s Pro Skater 3 + 4

In conjunction with the re-release of Tony Hawk’s Pro Skater 3 + 4, Hawk is partnering with an old friend of his — none other than popular cereal character Tony The Tiger. For those who aren’t familiar, Hawk was Tony The Tiger’s stunt double in a 1990 commercial.

    “I’ve been a fan of Kellogg’s Frosted Flakes since I could choose my own breakfast,” says Hawk. “But I got to work on a commercial in 1990 where I was the stunt double for Tony The Tiger. It’s the early days of animating things over actual video. It was a little archaic, but I think it looked great. I had to wear a skin tight suit.”

While Hawk is a megastar and has been for the past quarter century, that wasn’t the case in the early ’90s. This was before the debut of the X Games and before Hawk landed the groundbreaking 900. It also preceded the release of his popular video game series by nearly a decade.

The 57-year-old says he was just thankful to get work at that time, considering vert skaters didn’t receive much recognition in the early ’90s.

    “My friend Chris Miller was the main character in the commercial, so I ended up filming him as well,” Hawk details. “I was the cameraman, and I was Tony The Tiger in that commercial. At the time, there weren’t a lot of opportunities for skateboarders, especially vertical half-pipe skateboarders, so I was thankful to get a job, to be honest.”

The two Tonys will be teaming up again at the Vert Alert Legends Demo in Salt Lake City, Utah.

    “Here we are 35 years later, and we’ve incorporated Tony the Tiger into the game with some of the merch,” says Hawk. “I got to hang out with him on my ramp a couple weeks ago, and he is coming to our big event in Salt Lake City, the Vert Alert on July 18 and 19th. He’ll actually be there in person. It’s beyond any dream I would have had, because I didn’t imagine I’d even get to be a pro skater into my old age.”

Limited-edition skateboards and autographed merchandise will be given to five fans at the Vert Alert Legends Demo.

    “To bring those two elements together for me is an amazing collaboration,” says Hawk of teaming up with Tony The Tiger again. “We’ll be doing some giveaways with some prize packs, Tony the Tiger skateboards, limited edition Frosted Flakes boxes. I’m doing some social media stuff with Tony. It’s a meeting of the Tonys, which has been pretty cool. We actually created our own handshake.”

    Continue Reading

  • Tamron Completes Its Second Gen Trinity with the 16-30mm f/2.8 G2

    Tamron Completes Its Second Gen Trinity with the 16-30mm f/2.8 G2

    Tamron has announced the 16-30mm f/2.8 Di III VXD G2 zoom lens for both Sony E-mount and Nikon Z-mount, completing its second generation trinity — the “Daisangen” as Tamron calls it — as the new lens joins the 28-75mm f/2.8 G2 and 70-180mm f/2.8 G2 telephoto zooms.

    Daisangen is a term that originated from the game of mahjong, Tamron explains, and refers to a winning hand made by collecting three sets of dragon tiles.

    Three black Tamron camera lenses of varying sizes are displayed upright on an orange surface with a softly blurred background.

“Drawing from this concept, the photography industry uses the term ‘daisangen lenses’ in Japan to describe a set of three zoom lenses—a wide-angle, a standard, and a telephoto—all featuring a constant f/2.8 aperture throughout their zoom range,” Tamron says.

This third lens in its trifecta of G2 optics builds upon what Tamron calls the success of the “highly acclaimed” 17-28mm f/2.8 Di III RXD (Model A046). The 16-30mm f/2.8 Di III VXD G2 expands the zoom range while maintaining a compact, lightweight design and the company’s promise of exceptional image quality. Tamron also says the lens features improved autofocus performance, which contributes to better overall operability.

    A person holding a black Sony digital camera with a large wide-angle lens, standing against a blurred blue background.

Tamron also says that, from a design perspective, it improved the ergonomics of the body and gave the lens an “enhanced” exterior surface. The lens also comes with the promise of exceptional optical performance and beautiful bokeh.

The Tamron 16-30mm f/2.8 Di III VXD G2 features a construction of 16 elements arranged in 12 groups, although the company does not specify any special glass elements or coatings in that formula. It has an aperture range of f/2.8 through f/16 — an unusually narrow range, especially for a zoom lens — via a nine-bladed diaphragm. The lens has a minimum object distance of 7.5 inches at the wide end and 11.8 inches at the telephoto end. It measures four inches long on Sony E-mount and a slightly longer 4.1 inches on Nikon Z-mount. Similarly, the lens weighs 440 grams for Sony cameras and a slightly heavier 450 grams for Nikon cameras.
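    Since this is a full-frame (Di III) design, a quick back-of-the-envelope calculation gives a sense of how much wider 16mm is than 30mm. The sketch below uses the standard rectilinear angle-of-view formula on a 36 x 24 mm sensor; the results are rough illustrative estimates, not Tamron’s published specifications.

```python
# Back-of-the-envelope diagonal angle of view for the 16-30mm zoom on a full-frame
# (36 x 24 mm) sensor. Illustrative only; it ignores distortion and focus breathing,
# so the manufacturer's official figures may differ slightly.
import math

SENSOR_DIAGONAL_MM = math.hypot(36.0, 24.0)  # ~43.27 mm

def diagonal_fov_deg(focal_length_mm: float) -> float:
    """Diagonal angle of view in degrees for a rectilinear lens focused at infinity."""
    return math.degrees(2 * math.atan(SENSOR_DIAGONAL_MM / (2 * focal_length_mm)))

for f in (16, 20, 24, 30):
    print(f"{f} mm -> {diagonal_fov_deg(f):.1f} degrees")
# 16 mm works out to roughly 107 degrees diagonally, 30 mm to roughly 72 degrees.
```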

    A black Tamron camera lens with a zoom range of 16-30mm and an aperture of f/2.8, featuring textured adjustment rings and labeled as Di III VXD G2. The lens is standing upright on a white background.

    Tamron says the lens has a moisture-resistant construction, a fluorine coating on the front element, and the 16-30mm f/2.8 Di III VXD G2 also has a common 67mm front filter thread. It is also, of course, compatible with Tamron’s Lens Utility software.

    Below are a few sample photos taken with the new lens, courtesy of Tamron:

    A close-up of a Barbaresco 2001 wine bottle lying in a wicker basket, with wine corks and a baseball bat in the background on a wooden surface.

    A colorful outdoor sports court with green, blue, and yellow sections, palm trees, and a tall residential building with multicolored balconies in the background.

    Four wooden crates filled with green bitter melons, dark purple Indian eggplants, and round purple eggplants, arranged neatly in a vibrant display at a market.

    A wooden boardwalk runs past colorful historic buildings with visitor center signs in a small mountain town, with a snow-capped peak visible in the background.

    A small blue wooden building with a sign reading "Alaska Geographic Klondike Gold Rush National Historical Park" stands on a wooden boardwalk under a partly cloudy sky. Benches and wooden railings are in front.

    Green and purple aurora borealis lights swirl across a starry night sky above a snow-covered landscape, with dark silhouetted trees and a mountain in the background.

    Metal bike racks form concentric circles along a city sidewalk, with tall office buildings and traffic lights in the background, captured in black and white.

    A historic building with a clock tower and a U.S. flag is reflected in a puddle on the ground, with bare tree branches visible to one side. The image is inverted due to the reflection.

    A skateboarder performs a trick on the edge of an empty concrete pool under a bright blue sky, with a palm tree in the background.

    The 16-30mm f/2.8 Di III VXD G2 zoom lens will be available for Sony E and Nikon Z mounts for $929. The Sony E-mount version will be available on July 31 and the Nikon Z mount on August 22.


    Image credits: Tamron

    Continue Reading

  • Process and Control Today | Teledyne Gas & Flame Detection, Next-level safety: New portable PS DUO from Teledyne GFD detects two gases simultaneously

    Process and Control Today | Teledyne Gas & Flame Detection, Next-level safety: New portable PS DUO from Teledyne GFD detects two gases simultaneously

    Teledyne Gas & Flame Detection (Teledyne GFD) is unveiling its PS DUO, a portable dual-gas detector set to enhance personal safety in gas monitoring applications.

    This innovative handheld device features real-time monitoring with audible, visual (bright LED) and vibrating alarms, providing immediate alerts when gas levels exceed safety thresholds.

The new PS DUO uses passive diffusion sensing for the continuous detection of harmful gases in potentially hazardous environments, enhancing safety for personnel. It can monitor two gases simultaneously from a wide selection that includes carbon monoxide (CO), hydrogen sulphide (H₂S), sulphur dioxide (SO₂), ammonia (NH₃), oxygen (O₂), hydrogen (H₂), nitrogen dioxide (NO₂) and ozone (O₃).

    Users of the ATEX/IECEx-rated PS DUO can select flexible gas pairings according to their specific application. The result? Enhanced safety, flexibility and peace of mind in the field, backed by a 2-year warranty.

For applications in regions such as the Middle East, H₂S/SO₂ capability will prove especially useful. The PS DUO offers a measuring range for H₂S of 0–100 ppm with 0.1 ppm resolution, with 10 ppm and 15 ppm as the low and high alarm thresholds respectively. For SO₂, users can take advantage of a 0–20 ppm measuring range, 0.1 ppm resolution, a 2 ppm low alarm and a 5 ppm high alarm.
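    To put the quoted figures in context, here is a minimal, purely illustrative sketch of how those H₂S/SO₂ ranges and alarm levels could be represented and checked in software. It is not Teledyne firmware or a published API; the numbers simply mirror the specifications above.

```python
# Illustrative only: the PS DUO's quoted H2S/SO2 ranges and alarm levels as data,
# plus a simple check. Not Teledyne code; the values come from the article above.
THRESHOLDS_PPM = {
    "H2S": {"range": (0.0, 100.0), "resolution": 0.1, "low_alarm": 10.0, "high_alarm": 15.0},
    "SO2": {"range": (0.0, 20.0),  "resolution": 0.1, "low_alarm": 2.0,  "high_alarm": 5.0},
}

def classify(gas: str, reading_ppm: float) -> str:
    """Return an alarm state for a single reading of one gas."""
    t = THRESHOLDS_PPM[gas]
    lo, hi = t["range"]
    if not lo <= reading_ppm <= hi:
        return "OUT OF RANGE"
    if reading_ppm >= t["high_alarm"]:
        return "HIGH ALARM"
    if reading_ppm >= t["low_alarm"]:
        return "LOW ALARM"
    return "OK"

if __name__ == "__main__":
    # A dual-gas reading, mirroring the detector's two simultaneous channels.
    for gas, ppm in {"H2S": 12.3, "SO2": 1.4}.items():
        print(gas, ppm, classify(gas, ppm))
```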

    The LCD display provides continuous real-time gas concentration information, while the internal memory supports up to 30 alarm logs. Wireless connectivity allows seamless data retrieval and safety system integration.

    Of particular note is the instrument’s rugged, IP67-rated rubberised enclosure, which combines with an ergonomic, compact (56 x 89 x 21mm) and lightweight (200g) design for optimal user comfort, convenience and safety. The PS DUO will operate for up to 2 years on a single replaceable battery under normal use.

“With its reliable performance, flexible gas pairings and wireless integration, our PS DUO offers a powerful new option for industrial safety programmes – backed by Teledyne’s global service and support,” states Pawel Kulik, Product Manager-Portables, Teledyne Gas and Flame Detection. “It adds to an existing and highly successful range of personal safety and gas monitoring solutions that includes our portable Protégé ZM and PS200.”

Available in O₂, CO, H₂S and SO₂ models, the easy-to-use Protégé ZM (Zero Maintenance) single-gas monitor delivers high performance in a small, ergonomically designed package. With proven reliability in the field, it gives industrial workers and first responders the confidence to focus on the task at hand, not on their equipment.

Teledyne GFD’s PS200 four-gas compliance monitor is compact, lightweight, water resistant and extremely robust. This user-friendly device is a proven performer in hazardous locations with its ability to measure any combination of LEL (Lower Explosive Limit), O₂, CO and H₂S. The PS200 pumped gas detector features an internal sampling pump for optimal use in confined space applications.


    Continue Reading

  • Forget Prime Day: Amazon has already dropped the improved 2024 Kindle Scribe to its lowest price yet

    Forget Prime Day: Amazon has already dropped the improved 2024 Kindle Scribe to its lowest price yet

    If you’re in the market for a premium ereader, then I can’t recommend the latest edition of the Kindle Scribe highly enough. Sure, it’s an expensive epaper tablet, but if you can pick it up at a discounted price, there’s really nothing quite like it out there.

    And a week before Prime Day 2025 officially kicks off, Amazon AU has already produced the best Kindle deal yet – dropping the price of the Scribe by up to 31%. I say ‘up to’ because each variant has a different percentage off, with the best offer being on the 16GB Tungsten colourway that’s now available for AU$449.

This offer is also available for New Zealand shoppers, bringing the price down to NZ$469.41 with free shipping.

    That’s still a lot of money for the Kindle Scribe, but some shoppers might be able to save an additional AU$10 / NZ$10 if they use an “eligible” Citi or NAB card (the list is provided in the Terms via the product listing) and apply the code CARD10 at checkout.

    I’ve been using the Kindle Scribe since the 2022 edition launched, then upgraded to the 2024 version soon after it was released in Australia in December last year. I love it!

    My favourite feature is the ambient light sensor which automatically adjusts screen brightness depending on the light conditions where you’re reading. So it’s perfect indoors and out, in daylight or at night. Your eyes will thank you for it.

While it’s mainly an ereader, it’s one of the best screen and stylus combinations I’ve had the pleasure of using. Writing on the Kindle Scribe’s screen is the best there is of any epaper tablet I’ve tested – and I’ve tested a fair few of those, from the Kobo Elipsa 2E to the reMarkable 2 and even several models from Onyx Boox.

    It’s one of the few E Ink writing tablets that has native MS Word support. And then there are the AI writing features – one to decipher your scribbles and convert them to text, the other to summarise the notes in a Notebook. Both work a charm – I use the latter a lot!

    (Image credit: Sharmishta Sarkar / TechRadar)

    I admit that holding a 10-inch ereader isn’t the most ergonomic thing to do as a reader, but if you can spare the change for its official folding case, you won’t even need to hold it. I keep it propped up beside me in bed and read lying down on my side… yeah, that sounds weird now that I’ve written it out, but hey, I fall asleep easily enough while reading and Scribe goes into Sleep Mode in 30 minutes if there’s no activity. Look, mum, no hands!

    And if you want to jot notes in ebooks you’re reading, you can do that now as well. There are two ways to do so – little boxes called Active Canvas and a Side Panel Margin (yes, that’s what it’s called, how original!) for longer notes.

    The 2024 Kindle Scribe is my pick of the best premium ereader for good reason – Amazon has done well with this epaper writing tablet. My only complaint is that none of the Kindles now support Audible playback in Australia.

    If that doesn’t bother you, then grab the Scribe now – available in two colour options and three storage variants, all of which are discounted to their best prices yet. I don’t think it’s going to get any cheaper when Amazon Prime Day 2025 kicks off on July 8.

    You might also like…

    Continue Reading

  • Street Fighter 6 Aespa Juri Skin Opens Door for More Collabs

    Street Fighter 6 Aespa Juri Skin Opens Door for More Collabs

    Capcom is embracing more crossover potential and going full K-pop in Street Fighter 6, launching a massive collab with the girl group æspa to bring new looks and content to the game and, more specifically, Juri. 

    That’s right, SF6 is getting its first official collab costume that isn’t for the player characters, with Juri getting a K-pop redesign inspired by æspa. The rest of the game will receive content tied to the idol group too, ranging from new visual effects and cosmetics all the way to one of the æspa girls becoming a commentator.

    The Street Fighter 6 x æspa collab will start on July 4, 2025 and run for a full year. That means most of the crossover content will be available to purchase in the in-game shop or unlock via other means until July 3, 2026.

    Specifics on how players can unlock the special Juri Outfit 4 look inspired by the K-pop group and its virtual idol naevis will be shared once the event goes live, though Capcom does note event content will be featured in the in-game shop for the duration of the collab.

On top of Juri getting an actual collab costume, the æspa and naevis collaboration goes way beyond just one design and an alternate color.

    For starters, the Battle Hub will get an æspa makeover for the event, bringing the girls and themed designs to the online world of SF6. Additional Special Titles, phone backgrounds, and photo frames to use with your player character featuring æspa will also be available.

    Players who grab Juri’s æspa outfit will also unlock a K-pop remix of her theme and special visual effects that will automatically be applied when using the costume in matches.

If that wasn’t enough, naevis, the virtual idol tied to æspa, will also be added to the game as a new real-time commentator.

    Related Article: Full Street Fighter 6 Season 3 DLC Roster Revealed At Summer Games Fest

    Street Fighter 6 has rolled out multiple collabs with other brands, including anime, creators, and events in the past, but most of them have been limited to smaller cosmetics or costumes available for player characters to use in customization. The æspa collab could potentially lead to bigger, and more in-depth content that players have been asking for when it comes to these crossovers.

Juri getting a brand-new costume styled after æspa is a first for SF6, and it might be a sign that Capcom is ready to bring in more brands to provide outfits for the main roster. Much like the Capcom crossover costumes featured in previous games such as Street Fighter V, this could mean Monster Hunter and Resident Evil collabs with actual costumes, or the doors could be blown open for third-party designs as well.

Street Fighter recently saw its Fortnite crossover skins pop back up in the shop, along with a massive Overwatch 2 collab, while Monster Hunter Wilds has an ongoing Street Fighter event for players to complete. This larger K-pop collab is a great first step toward Capcom broadening what kind of content SF6 will offer in future events, while also appealing to new users in a unique way.


    Continue Reading

  • Valve fixes one of the more annoying parts of using the Steam Deck OLED

    Valve fixes one of the more annoying parts of using the Steam Deck OLED

    Summary

    • The SteamOS 3.7.13 update fixes WiFi issues on the Steam Deck OLED for a better gameplay experience.
    • Patch focuses on bug fixes for input issues, visual corruption, cursor, and power button detection.
    • A comprehensive changelog includes audio fixes, accessibility improvements, and additional platform support.

The Steam Deck OLED is a great way to play your games on the go, which is why it’s important for it to have a strong and stable WiFi connection. I mean, sure, you can plug in a USB Ethernet adapter, but if you want to, say, play online at a cafe, you’re very much dependent on the Steam Deck’s WiFi capabilities.

Unfortunately, the Steam Deck OLED’s WiFi can be a little spotty, which gets really annoying when you want to play on the go. Well, here’s some good news for you: Valve has just released a new version of SteamOS, and tucked away within its patch notes is a fix for the Steam Deck OLED’s WiFi issues.

    Related

    I already installed SteamOS on the Asus ROG Ally X, and I regret it

    This needs a little longer in the oven..

    The SteamOS 3.7.13 update fixes a ton of annoying issues

    Running GeForce Now on the Steam Deck OLED

    On the Steam Community website, Valve breaks down what’s new in SteamOS 3.7.13. By the looks of things, the main focus of this update, titled “Out Exploring,” is squashing lots of infuriating bugs. As such, if SteamOS has been acting strangely for you lately, it’s worth taking a peek at the changelog and seeing if your issue got squashed already.

Perhaps the most welcome change is the fix for a WiFi regression on the Steam Deck OLED. People have reported spotty WiFi performance on the OLED model lately, and until now the best advice was to install the beta version that contained the fix. Now there’s no need to leave the release channel; just update your Steam Deck and you’re good to go.

There are also fixes for input issues on the Asus ROG Ally, a line of visual corruption on the cursor, and SteamOS hanging on Strix Point devices. So yes, a ton of annoying issues have now bitten the dust. There’s also a nice change that allows for better power button detection on third-party devices like Ayaneo and OneXPlayer systems. Nice one, Valve.

    Here’s the full changelog:

    Continue Reading

  • AI Tools: Video & Animation Come to Midjourney by Jeff Foster

    AI Tools: Video & Animation Come to Midjourney by Jeff Foster

Midjourney was my first real text-to-image AI tool experience, starting back in 2022. I’ve watched it evolve as the industry swelled with competition, with other image, animation and video tools popping up almost weekly ever since. By January 2023, the tools had evolved to a place where they made us sit up and take notice, as I outlined in my first AI Tools article, AI Tools Part 1: Why We Need Them.

    But after years of progress and lots of testing, Midjourney has now raised the bar yet again, with the introduction of their new video tool, and I’m pleasantly surprised at what it can do so quickly and sensibly. Here’s what I’ve explored so far…

    Frame from Noir style short clip created in Midjourney Video

    Midjourney Animate

    Midjourney announced the new video animation feature and the output is quite impressive!

I’ve been using a lot of different animation and video generation tools over the past few years, as you may know if you’ve been following my AI Tools series here on ProVideo. But this is the most seamless and quickest workflow I’ve engaged with yet.

Most generators require a starting image – like a keyframe, if you will. I almost always start with an image I’ve generated in Midjourney and then go to another tool to animate it. (You can see my last article, AI Tools: Generative AI for Video & Animation Updates, for more examples of that workflow.) But now in Midjourney, you can either generate a new image as your source or start with your own photo.

    First – the details and specs…


Currently, everyone with an account can access the Animate option, but only the Pro and Mega plans can use Relax mode. Videos take about 8x longer to process than images, but each round provides you with four variations to choose from.


    Video Output Sizes & Formats

Note that the maximum resolution at the moment is 480p (832×464), and the exact dimensions vary depending on aspect ratio, of course.


You can export your video as a compressed MP4 for social use, a larger RAW MP4 H.264 version (still compressed, but less so), or an animated GIF. You can also link to the completed video’s URL, as it stays in the cloud in your account.

For reference, the codec data from a “RAW” file downloaded from Midjourney confirms it’s a standard H.264 MP4.
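    If you want to check the stream details on one of your own downloads, a quick probe with FFmpeg’s ffprobe prints the same information. This is just an illustrative sketch, not part of Midjourney’s tooling; it assumes ffprobe is installed and that the clip is saved as midjourney_raw.mp4 (a placeholder name).

```python
# Illustrative sketch: inspect a downloaded Midjourney clip with ffprobe (part of FFmpeg).
# Assumes ffprobe is on your PATH and the file is named "midjourney_raw.mp4".
import json
import subprocess

result = subprocess.run(
    [
        "ffprobe", "-v", "error",
        "-print_format", "json",
        "-show_format", "-show_streams",
        "midjourney_raw.mp4",
    ],
    capture_output=True, text=True, check=True,
)

info = json.loads(result.stdout)
video = next(s for s in info["streams"] if s["codec_type"] == "video")
print("codec:", video["codec_name"])                          # expect "h264" per the article
print("resolution:", f'{video["width"]}x{video["height"]}')   # expect 832x464 for 16:9 output
print("container:", info["format"]["format_name"])
```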


    Midjourney Video Test Drive

Of course I had to dive in and absorb all I could with this new feature, and I spent a couple of days putting it through its paces.

Starting off, I tried using some simple prompts for various news reporters to be used as B-roll. (I’d use something like this in a pinch to put on a screen in a shot simulating a TV newscast, for instance.) The quality is good enough for the scale it provides (480p), but it’s in no way intended for full-screen use in this initial roll-out.

    My first step was to get some figures to animate. I entered short prompt descriptions for Midjourney to generate some examples. It’s funny what AI thinks about ages at times. And some of the results are just so wrong they’re HILARIOUS!


    After selecting the subject I wanted for each shot, I let Midjourney decide on the motion with the Auto Animate option. Each pass provides you with four different videos to choose from so you can extend out many options.

    I created this video to show you the selections and results for each subject.

    I did the same with these other examples from ChatGPT prompts and explain the process for each example.

    (Note: the VO says 840p when I know damn well it’s 480p! Linguistically dyslexic I guess!)

     

    Photo to video

Testing out the photo-to-video feature in Midjourney, I used an old image from my ’80s hair rock & roll days. So much hair product back then!


    I uploaded the photo as the first frame and I let Midjourney do the work from there. I extended it just once more to make about an 8 second clip. I exported it as an animated GIF (not JIF) since there isn’t any audio. If only I was really that cool on stage!


     

    Using Midjourney for Storyboarding and Previz

    Currently, I see Midjourney as a tool for creativity and helping you bring your ideas to life. Not necessarily as an end product, but to realize how the written word can be visualized on the screen.

    This could be an amazing tool for screenwriters trying to sell a treatment, or for storyboarding scenes and shots for locations, sets, lighting and blocking.

    I created a short scene completely with AI tools in just a few steps – with two different variations to show how seamlessly Midjourney responds to prompts and extensions.

I started by asking ChatGPT for project ideas and followed one of the results it gave me.


    I was happy with the resulting images so I went with one I liked and decided to build a story around what the character ended up doing with my extended prompts.

I must say, this was one of the most satisfying and creative projects I’ve done in years, with an intuitive workflow. And it really only took a few hours from start to finish, because I had no preconceived idea of what it was going to be – I let the AI tools be my partners: my writers, actors, sound FX, staging and camera ops. I really felt like a director of sorts.


Each render pass provides four different variations based on the first frame (or continues from the last pass, with up to four extensions). Deciding which take to use is very subjective, but that’s part of the storytelling. In my case, I started with the original ChatGPT prompt and then appended the action I wanted to the end of the prompt as instructions.

With each pass I would add a new instruction or direction. The camera moves and angles were mostly determined by Midjourney, but those can be directed more closely as well. However, it doesn’t always follow instructions for action; you can often fool it by rewording them. Sometimes the mistakes actually change the story and you end up following a different rabbit down the hole.


Here’s the order of the process: the prompt instruction for each pass, and the decision I made from those render results to continue building my scene.

    Prompt (with selected start image): film noir style, trench coat detective under a streetlamp in heavy rain, black and white with subtle color tint, glistening cobblestone, intense contrast, 1950s urban alley setting, moody and mysterious, he’s holding a lit cigarette and looks around like he’s waiting on someone


    Prompt change/addition:  he starts to cross the street while the camera follows his movement and he flicks the cigarette down on the street. (He didn’t cross the street but I went with it)


    Prompt change/addition: a woman appears from the shadows on the right and runs up to him urgently. (more like a slow saunter, but it works)


    Prompt change/addition: the couple kisses and embrace.


You’ll have to watch the video below to see how Variation 2 ended!

    So I needed to add sound to this short scene and I needed a voice over to narrate in an appropriate tone and voice.

    I started with ChatGPT again and my AI script writing partner and I came up with some good lines. (You’ll hear both versions in the video below).

    I used the script text in ElevenLabs using their new v3 Alpha model for a more natural speech delivery and I found a great voice that really fit the time period.


    I also used ElevenLabs to produce my sound FX and music bed.


Everything mixed easily in Adobe Premiere Pro in just minutes. And here’s the result(s).

    For more detailed info about Midjourney video options and usage instructions, visit their website.

     

    Continue Reading

  • Apple eyes AI boost with Anthropic or OpenAI as it rethinks Siri’s future

    Apple eyes AI boost with Anthropic or OpenAI as it rethinks Siri’s future

    Apple is considering using artificial intelligence (AI) technology from Anthropic or OpenAI to power a new version of Siri, sidelining its own in-house models in a potentially blockbuster move aimed at turning around its flailing AI effort.

    The iPhone maker has talked with both companies about using their large language models for Siri, according to people familiar with the discussions. It had asked them to train versions of their models that could run on Apple’s cloud infrastructure for testing, said the people, who asked not to be identified discussing private deliberations.

    If Apple ultimately moves forward, it would represent a monumental reversal. The company currently powers most of its AI features with home-grown technology that it calls Apple Foundation Models and had been planning a new version of its voice assistant that runs on that technology for 2026.

    A switch to Anthropic’s Claude or OpenAI’s ChatGPT models for Siri would be an acknowledgment that the company is struggling to compete in generative AI – the most important new technology in decades. Apple already allows ChatGPT to answer web-based search queries in Siri, but the assistant itself is powered by Apple.

    iPhones on display at an Apple store in the Huangpu district in Shanghai. Photo: AFP

    Apple’s investigation into third-party models was at an early stage, and the company had not made a final decision on using them, the people said. A competing project internally dubbed LLM Siri that uses in-house models remains in active development.

    Continue Reading

  • Apple weighs using Anthropic or OpenAI to power Siri in major reversal: Bloomberg News

    Apple weighs using Anthropic or OpenAI to power Siri in major reversal: Bloomberg News

    An Apple logo is displayed on a smartphone with stock market values in the background.

    SOPA Images | LightRocket | Getty Images

    Apple is weighing using artificial intelligence technology from Anthropic or OpenAI to power a new version of Siri, instead of its own in-house models, Bloomberg News reported on Monday.

    Shares of the iPhone maker, which had traded down earlier in the session, closed 2% higher on Monday.

    Apple has had discussions with both companies about using their large language models for Siri, asking them to train versions of their LLMs that could run on Apple’s cloud infrastructure for testing, the report said, citing people familiar with the discussions.

    Apple’s investigation into third-party models is at an early stage and the company has not made a final decision on using them, the report said.

    Amazon-backed Anthropic declined to comment, while Apple and OpenAI did not respond to Reuters requests.

The company said in March that AI improvements to its voice assistant Siri would be delayed until 2026, without giving a reason for the setback.

    Apple shook up its executive ranks to get its AI efforts back on track after months of delays, resulting in Mike Rockwell taking charge of Siri, as CEO Tim Cook lost confidence in AI head John Giannandrea’s ability to execute on product development, Bloomberg had reported in March.

At its annual Worldwide Developers Conference earlier this month, Apple focused more on incremental developments that improve everyday life — including live translations for phone calls — than on the sweeping AI ambitions its rivals are capitalizing on.

Apple software chief Craig Federighi said then that the company is opening up the foundational AI model it uses for some of its own features to third-party developers, and that it will offer both its own and OpenAI’s code completion tools in its key Apple developer software.

    Continue Reading

  • 5 reasons why I’m moving my photo and video editing to free, open-source, and self‑hosted tools

    5 reasons why I’m moving my photo and video editing to free, open-source, and self‑hosted tools

    I didn’t wake up one day and just decide to ditch proprietary software for free and open-source tools. But in my transition away from Adobe, it was the natural next step. I needed a new stack of tools that not only measured up in their capabilities but also didn’t have the subscription burden, privacy concerns, or ecosystem lock-in.

    Enter my new toolkit: GIMP, Darktable, Krita, Inkscape, Penpot, OpenShot, Kdenlive, and more. And I’m also self-hosting PhotoPrism to manage all my media files. This kit not only holds up, but it outperforms some of the bigger names in the creative software space. Here’s why I’ve made the shift to tools that I can host, own, and control.

    Related

    7 reasons why I keep coming back to open-source creative software

    Open-source tools always pull me back in

    5

    No subscriptions

    I’m not renting my creativity anymore

Starting with probably the biggest benefit of switching to free and open-source software: no more subscriptions. Creative software has been the Netflix of creativity for some time now, and I was getting subscription fatigue. This is especially frustrating for tools I don’t use often but need access to on occasion, like Illustrator or Premiere Pro.

    With open-source apps like Darktable for photo editing, Kdenlive for video editing, and PhotoPrism for media management, I get serious capabilities without any of the useful stuff being locked behind a paywall. There are also no features to unlock with a higher tier for most of these apps. That’s all there is to it — everybody loves a free and capable app.

    Related

    I canceled all my Adobe subscriptions: Here’s what I’m using instead

    Creativity doesn’t need to be locked behind a paywall

    4

    Local control

    Actual ownership of my projects

Another big draw was the fact that I can run some of these tools on my own machine: not just installing them, but self-hosting them. Take PhotoPrism, for example; it’s not just a media viewer, it’s a full-blown media manager that I’m running on my own PC through Docker. It doesn’t store my projects in someone else’s cloud, and everything stays on my hard drive until I choose otherwise.

    This way, I’m protected from third-party server crashes and data mining, and there’s no auto-syncing I didn’t ask for. Self-hosting also gives me a lot more customization over my apps and how I access them. Overall, local control over your files feels superior once you’ve gotten a taste of it.
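    For anyone curious what that PhotoPrism setup looks like in practice, here’s a minimal, illustrative sketch. It assumes the published photoprism/photoprism image and its documented 2342 web port; the paths and admin password are placeholders, and it simply scripts the Docker launch from Python to show the shape of the setup, not my exact configuration.

```python
# Illustrative only: roughly how a self-hosted PhotoPrism instance gets launched.
# Uses the public photoprism/photoprism image and its documented defaults
# (web UI on port 2342, originals and storage volumes); the paths and the admin
# password below are placeholders, not a real setup.
import subprocess
from pathlib import Path

originals = Path.home() / "Pictures"            # the photos/videos to index
storage = Path.home() / "photoprism-storage"    # cache, thumbnails, index
storage.mkdir(exist_ok=True)

subprocess.run(
    [
        "docker", "run", "-d", "--name", "photoprism",
        "-p", "2342:2342",                               # web UI -> http://localhost:2342
        "-e", "PHOTOPRISM_ADMIN_PASSWORD=change-me",     # set a real password here
        "-v", f"{originals}:/photoprism/originals",
        "-v", f"{storage}:/photoprism/storage",
        "photoprism/photoprism:latest",
    ],
    check=True,
)
print("PhotoPrism should be reachable at http://localhost:2342")
```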

    Related

    I don’t pay for any creative app on Windows or Mac, I use these open source tools instead

    Unlocking creativity without the cost

    3

    Privacy is an actual feature

    I know exactly where my data lives

    In the same vein as above, privacy concerns disappear with open-source, self-hosted tools. Usually, when I upload or sync my projects to a service like Creative Cloud, there’s a little itch of anxiety — what are they going to do with my files? And these worries are entirely valid when considering privacy policy scandals surrounding major industry players, particularly regarding user data being fed into AI training models.

    With my media on my own drive, I don’t have to scour through the terms and conditions of an app to ensure it’ll remain protected. Or worry that my images are being analyzed to sell me ads. Or be surprised to find new AI training clauses. And even non-self-hosted tools like Darktable and Kdenlive don’t sell, track, or report usage; they simply work. In 2025, this kind of digital silence is refreshing.

    Related

    I replaced Google Photos, Drive, and Docs with these free self-hosted services — and they’re better, too

    If you want to de-Google your life, then these are some of the best services to get started.

    2

    Pro features without the bloat

    Lean but still powerful

    I’ll give credit where it’s due – proprietary software often has better features. Photoshop has a more comprehensive masking and layering system, while After Effects’ keyframing offers greater precision. But that doesn’t mean we should sleep on open-source features.

    For example, I prefer GIMP’s Foreground Select tool for its speed and efficiency, and its Cage Transform tool is also better than Photoshop’s Puppet Warp. Kdenlive also measures up to Premiere Pro in terms of its multi-format, multi-track, audio, proxy editing, and rendering capabilities. Darktable can also handle your RAW shots just as well as Lightroom.

    Sure, you lose some with open-source, but you also win a lot. My new stack takes up way less space on my computer, and some of them even run faster than their paid counterparts. I’m happy to give up a handful of proprietary features for a faster, lighter app that can do the job just as well. Oftentimes, I didn’t even use most of the features in paid tools and just kept the subscription for some niche feature. Keeping a resource-hungry, space-gobbling app for a handful of features isn’t worth it.

    Related

    4 things GIMP does better than Photoshop

    There are some things that Photoshop could learn from GIMP

    1

    No format or ecosystem lock-in

    No more one-brand workflows

    Closed-source software often traps you in its ecosystem, whether through its formats, integrations, or cloud services. If you shoot images on a Canon or even your iPhone, you edit them in Lightroom, export to Photoshop, sync to the Cloud, and now you’re locked into that workflow. Open-source is a bit more like Lego — you can stack your design toolkit however you need and don’t have to worry about proprietary codecs and formats, making it easier to shift between apps.

    Related

    5 best open-source alternatives to Photoshop

    Find alternatives to Photoshop without risking your privacy

    Open-source isn’t perfect, but it’s empowering

    I’m not saying everyone should dump their favorite paid tools. If the Adobe suite works for you, there’s no need to switch. However, if you’ve been feeling boxed in, experiencing subscription fatigue, concerned about privacy, or simply bored with the same old tools, it’s worth exploring what free, open-source, self-hosted tools have to offer. I’m certainly happy I made the switch.

    Continue Reading