Technology

Matterport grabs $5M more to accelerate deep learning development for their 3D capture tech


Matterport is picking up new funding as it looks to speed the development of deep learning tech in its capture technology, which brings immersive views of spaces into 360-degree 3D.

The company, which largely specializes in scanning spaces for commercial and real estate purposes, announced today that it has picked up $5 million in funding from Ericsson Ventures. This strategic raise brings the company’s total announced funding to $66 million, according to Crunchbase.

As 3D rendering grows more important thanks to spatial computing platforms like VR and AR, Matterport holds one of the biggest libraries of 3D environments, thanks to loyal and prolific users who have uploaded more than half a million scans of public and private spaces, all of which are already viewable in VR.

A big focus of this new investment is extracting ever more insight from these 3D scans through deep learning-based AI development, which will help the company not only understand what’s in a space but also improve the quality of the 3D images themselves.

“Ericsson Ventures saw the tremendous opportunity Matterport has to extend our technology lead by using our massive library of 3D models as a deep learning training dataset to create AI that will be the basis for our next generation products,” Matterport CEO Bill Brown said.

In May the company launched its Pro2 camera, which addressed a big request from existing customers who were excited about the potential of 3D 360 room scans but still needed 2D images to put into print materials. The camera retails for $3,995 and is available now.

Huddersfield Designers Bring New Ginetta Racing Car to Life

The in-house design team at the 3M Buckley Innovation Centre (3M BIC) has used 3D technology and augmented reality to help Ginetta fine tune its latest prototype. 


Having already provided a similar service for the launch of its first prototype in 2015, Ginetta approached the 3M BIC design team to animate its £1.3 million LMP1 machine.

This enabled the car manufacturer’s own in-house design team to visualise the car’s development, as well as showcase it to potential buyers at a launch event at Silverstone Circuit.

Ewan Baldry, technical director at Ginetta, said: “3D technology is an important part of our design process and marketing. To see something on a flat CAD screen has a few limitations, so being able to see something you can move around is very helpful.

“The main thing with a project such as this, from a marketing point of view, is to show credibility in the early stages to demonstrate to people the direction you are heading in, therefore having 3D visuals was key.”

The animation for the LMP1 car was created using STL data (the same files used for Computational Fluid Dynamics (CFD) testing and wind tunnel analysis) submitted to the 3M BIC design team by Ginetta.

Some adjustments had to be made to the original model in order for it to be re-textured with the corresponding racing livery, using Autodesk 3DS Max.

The team then rigged the car for animation and set the lighting for rendering purposes.

Paul Tallon, lead consultant designer at the 3M BIC, said: “3D rendering is a process in which an algorithm calculates the movements of a virtual photon on interaction with a surface of varying qualities.

“With the 3M BIC’s High Performance Computer and the latest Vray rendering software, we were able to get the detail to look as real life as possible in our render. This was particularly important for Ginetta who was looking for a realistic render to show their clients.”
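Tallon’s description of rendering as simulating photon-surface interactions can be illustrated with a toy ray caster. The sketch below is illustrative only (not the V-Ray pipeline the team actually used): it intersects a single ray with a sphere and computes simple Lambertian diffuse shading at the hit point.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    # Solve |o + t*d - c|^2 = r^2 for the nearest positive t.
    # direction is assumed to be unit length, so the quadratic's a = 1.
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def lambert_shade(normal, light_dir):
    # Diffuse intensity: how directly the surface faces the light.
    return max(0.0, sum(n, ) if False else sum(n * l for n, l in zip(normal, light_dir)))

# Camera at the origin looking down +z at a unit sphere centred at z = 5.
t = ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
hit = [t * d for d in (0, 0, 1)]
normal = [(h - c) / 1.0 for h, c in zip(hit, (0, 0, 5))]
intensity = lambert_shade(normal, (0, 0, -1))  # light facing the camera side
```

Production renderers like V-Ray extend this same intersection-and-shade loop with many bounces, materials and light sources, which is why render farms and GPUs matter.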

As well as the on-screen render, the design team produced the car in augmented reality (AR) for use with the Microsoft Hololens, enabling people to walk around a scaled down holographic version of the car.

A 3D model was also printed in nylon by selective laser sintering (SLS) on the industrial additive manufacturing printer on the 3M BIC’s Innovation Avenue; all of these assets were showcased at the launch event at Silverstone.

Ewan added: “Having worked with the 3M BIC team previously we knew they’d do the project justice. Again, we were really pleased with the service. We didn’t give them very much time, but they still produced something which was professional and to a high standard.”

Significant interest in the LMP1 has already been expressed following the launch event, from both new and existing customers.

The 3M BIC design team is currently working on the next stage of the process: a serious-gaming experience that allows users, particularly racing drivers, to virtually test the LMP1 car on a track under varying scenery and weather conditions to enhance the driver experience.

Leeds-based Ginetta, the leading British race car manufacturer, was founded in 1958 and acquired by racing driver and businessman Lawrence Tomlinson in 2005.

Since then it has taken the racing industry by storm, selling cars across the world and training some of the brightest stars in motorsport.

Source: bqlive.co.uk

VR tech helps to develop ships of the future

Tritec Marine is using Virtalis’ new ActiveMove CVR system which integrates a Head-Mounted Display (HMD) to form a small, turnkey VR solution in a box.


David Scott, Director and General Manager, Tritec Marine explained: “We had been investigating Virtual Reality (VR) for some time, ever since we attended an industry conference on the digital enterprise, and we saw real value in bringing our CAD data into 3D to fully communicate our conceptual designs.”

Tritec is known for naval architecture and embedding teams of engineers to supervise builds in China and Korea, but the company is increasingly moving towards developing concept ship designs which directly solve existing and challenging maritime transportation problems or improve on current practices.

“We have to work on overturning preconceived ideas,” said Scott, “as our design concepts have been developed from first principles, not from what is there already. We realised that VR isn’t just for gaming and consumer sales and that for us the value will lie in being able to walk disparate stakeholders through our concepts. I experienced CyberAnatomy and thought that I very quickly understood more about the human anatomy than I ever could have assimilated from books. Then we discovered that Virtalis already operated in this sector and that its Visionary Render software can take our CAD data and swiftly render it into virtual 3D ships.”

Visionary Render delivers advanced rendering of huge models in real-time with ease of importing from a range of data sources, maintaining naming, hierarchies and the all-important metadata.

ActiveMove CVR combines a best-in-class consumer headset and a VR-ready Lenovo laptop, integrated into a custom-designed case, to provide a VR solution that can be assembled in minutes.

“Since we have ventured into the virtual world”, commented Scott, “we have had a veritable tsunami of ideas about how we can use the technology, from virtual prototyping before the build to digital twinning for maintenance. It is apparent that VR technology makes cost and time savings from day one, because the snagging is done in the virtual world, not in the real world. So far, we have only shown our models via CVR and Visionary Render to internal stakeholders, but they have been very impressed and it is clear that VR helps us get our message across to different audiences from different backgrounds.”

The first project that CVR is being deployed on involves radical concept designs for ships transporting Liquefied Natural Gas (LNG) and Liquefied Petroleum Gas (LPG). With a recognition that autonomous ships are considered by many to be the future of commercial shipping, Tritec is developing a revolutionary ship/port interface that automatically moors and unloads its cargo.

Source: dpaonthenet

25 Fastest Gaming Laptops Ranked

These are the gaming laptops we’ve tested with the best 3D performance over the past year.


Gaming on a laptop is no longer the frustrating compromise it once was. Slimmer designs paired with more powerful processors and graphics cards have brought gaming laptops closer than ever to performance previously found only in desktops.

And the pace of innovation hasn’t slowed. Just in the past year, laptops have gained easy support for high-end virtual reality headsets like the Oculus Rift and HTC Vive, and new designs fit top-tier graphics hardware into very slim bodies, as in the case of the 17mm-thick Asus ROG Zephyrus, the thinnest laptop with an Nvidia GeForce GTX 1080 GPU.

Putting gaming laptops to the test

For this roundup, we’ve taken all the laptops with discrete graphics hardware tested over the past 12 months, and ranked them based on 3D performance. When testing a gaming laptop or desktop, we run preset tests using several games, including Deus Ex: Mankind Divided, Bioshock Infinite, and others, along with standard benchmarks like 3DMark, which is designed to test a computer’s 3D graphic rendering capabilities.

For this list, we’re ranking the laptops in order of 3DMark scores, but the real-world game scores (presented as the number of frames of animation per second the laptop can render) match very closely. Note that these scores are specifically for the exact configurations of each laptop we tested, and almost all can be configured with a wide range of options.
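The ranking method itself is simple: sort the tested configurations by 3DMark Fire Strike Ultra score, descending. A minimal sketch, using a few of the scores from the roundup table as sample data:

```python
# (name, 3DMark Fire Strike Ultra score) pairs taken from the roundup table.
scores = [
    ("Razer Blade Pro", 4456),
    ("Acer Predator 21X", 9444),
    ("MSI GT83", 8594),
    ("Asus ROG G701V", 5226),
]

# Rank descending by score, as the article does.
ranked = sorted(scores, key=lambda pair: pair[1], reverse=True)
for rank, (name, score) in enumerate(ranked, start=1):
    print(f"{rank}. {name}: {score}")
```

Real-world game benchmarks (frames per second) would produce nearly the same ordering, per the article, but a single synthetic score keeps the tiebreaking unambiguous.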

The winners are…

The results offer few surprises. The handful of laptops with dual video cards (rare in a laptop) came out on top, followed by laptops with a single Nvidia 1080 GPU and so on down the list. The No. 1 spot is held by the most expensive laptop we’ve ever reviewed, the $9,000 Acer Predator 21 X. But at more reasonable prices, systems from Asus, Alienware, Origin PC, Lenovo, HP, MSI and Razer, among others, are all represented.

As we test many more everyday laptops than gaming ones, the last few spots get us into crossover territory, with Nvidia and AMD GPUs that aren’t really for gaming, so serious gamers should stick with something that has at least an Nvidia 1050 graphics card.

More details on each laptop, including links to reviews and benchmark scores, are in our roundup gallery, with a top-level overview below. We’ll update the rankings as new gaming laptops are tested in the CNET Labs.


Acer’s frankly insane $9,000 Predator 21 X was the top performer in this roundup.


Top 25 Gaming Laptop Performers

System Name | 3DMark Fire Strike Ultra score | Graphics Card
1. Acer Predator 21X | 9444 | 2x Nvidia GeForce GTX 1080
2. MSI GT83 | 8594 | 2x Nvidia GeForce GTX 1080
3. Asus ROG G701V | 5226 | Nvidia GeForce GTX 1080
4. Alienware 17 R4 | 5024 | Nvidia GeForce GTX 1080
5. Origin PC Eon17-X (2017) | 4970 | Nvidia GeForce GTX 1080
6. Origin PC Eon17-X (2016) | 4919 | Nvidia GeForce GTX 1080
7. Razer Blade Pro | 4456 | Nvidia GeForce GTX 1080
8. Asus ROG G752VS-XS74K | 4126 | Nvidia GeForce GTX 1070
9. Asus ROG Zephyrus | 4095 | Nvidia GeForce GTX 1080 with Max-Q Design
10. Alienware 15 R4 | 4054 | Nvidia GeForce GTX 1070
11. HP Omen (17-inch) | 3816 | Nvidia GeForce GTX 1070
12. Origin PC Evo 15-S | 2671 | Nvidia GeForce GTX 1060
13. MSI GS73VR-7RF Stealth Pro | 2647 | Nvidia GeForce GTX 1060
14. Alienware 13 R3 (OLED, late 2016) | 2609 | Nvidia GeForce GTX 1060
15. Razer Blade | 2593 | Nvidia GeForce GTX 1060
16. Lenovo Legion Y720 | 2523 | Nvidia GeForce GTX 1060
17. Dell Inspiron 15 7000 Gaming | 1871 | Nvidia GeForce GTX 1050 Ti
18. Origin PC EON15-S | 1861 | Nvidia GeForce GTX 1050 Ti
19. Lenovo Legion Y520 | 1855 | Nvidia GeForce GTX 1050 Ti
20. Asus ROG Strix GL753VE-DS74 | 1822 | Nvidia GeForce GTX 1050 Ti
21. Acer Aspire VX 15-591G | 1252 | Nvidia GeForce GTX 1050
22. Dell XPS 15 (2017) | 1242 | Nvidia GeForce GTX 1050
23. Wacom MobileStudio Pro 16 | 810 | Nvidia Quadro M1000M
24. Samsung Notebook 9 Pro | 547 | AMD Radeon 540
25. HP Spectre x360 | 357 | Nvidia GeForce 940MX

 

Source: cnet.com

iPhone 7s Plus and iPhone 8 – Everything You Need to Know

The latest batch of dummy “iPhone 8” and “iPhone 7s Plus” phones has apparently made its way out of China, as a pair of videos on Wednesday offer hands-on looks at what appear to be identical device mockups.

In a first video, YouTube creator Danny Winget got his hands on what he claims to be an “iPhone 7s Plus” prototype (dummy phone), though the part is almost assuredly a mockup based on leaked CAD renderings and rumors.

Like alleged “iPhone 7s” and “7s Plus” dummy units photographed earlier today, the mockups in Winget’s video are emblazoned with “Conformité Européenne” (CE) and battery disposal iconography. Apple digitized regulatory markings with the iPhone 7 in the U.S., moving the icons to the “About” section in Settings and leaving only the “iPhone” logo above small text reading “Designed in California Assembled in China,” plus the model number, FCC identifier and IC code. International models do incorporate regulatory marks, but these are much longer than simply “CE.”

While the dummy unit is probably a knock-off, its design could be based on legitimate schematics. Apple suppliers in China have been known to leak sensitive data, including final design molds, documents and internal components.

As seen below, the “iPhone 7s Plus” dummy unit is expectedly similar to current iPhone 7 Plus hardware in terms of component positioning, bezel design and dimensions. The only obvious difference is a glass back, which appears to sport thinner antenna lines than existing iPhone models. Apple is anticipated to employ a glass chassis in all 2017 iPhone models to facilitate wireless charging.

Winget goes on to compare the “7s Plus” against a supposed “iPhone 8” unit, illustrating the extreme deviation in display size and obvious aesthetic differences. While the “7s Plus” model boasts Apple’s normal thick “chin” and “forehead” bezels, the “iPhone 8” bezels are almost nonexistent.

Notably, Winget’s “iPhone 8” sports white bezels, contradicting recent reports that Apple intends to limit front face color options to black when the device launches. Whether the company plans to release a version with white bezels, as is available on certain iPhone configurations, is unclear.

A second video from Techtastic, also posted today, reveals what appears to be an “iPhone 8” chassis and front screen assembly. Both the chassis and front face are done in black, consistent with recent rumors.

Not much can be gleaned from the video, but it does give a sense of what the device might look like in a user’s hand.

Closer inspection of Winget’s mockup and the Techtastic unit shows both dummy models are identical to parts featured in today’s image from leaker Sonny Dickson. Further, a separate “leak” on Wednesday featured a copper-gold “iPhone 8” showing the same “CE” and battery disposal indicia. Considering the timing and the apparently identical markings, all of the components seen today seem to originate from a single source.

Apple is expected to debut “iPhone 8” alongside incremental changes to the iPhone 7 series at a special event in September. The new flagship smartphone is thought to include new and exotic technologies never before seen in Apple’s product line. A number of these features, including facial recognition, 3D-sensing cameras, a home button-less display, high-definition video recording, “SmartCam” photo and video capture, and more, have been all but confirmed by Apple’s inadvertent release of HomePod firmware late last month.

Source: Appleinsider.com

Rendering Now Used by Law Enforcement to Solve Plane Crash Investigation


Investigators will be able to view the entire scene of a recent fatal plane crash on Interstate 15 in extreme detail from any angle they want because of the high-tech equipment used to document the scene.

The FARO X330 uses lasers and a camera to construct any scene around it, resulting in a high-definition 3D map.

Sgt. Randall Akers, the accident investigation program manager for the Utah Highway Patrol, said the department bought seven of the scanners in 2014 and each cost about $40,000.

Akers said the machine takes multiple scans to document a typical crime scene and each scan takes between 4 and 12 minutes.

“Like any laser measurement device it shoots out a beam and gets a return to measure distance,” he said. “It does it in 360 degrees — in a circle.”
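The beam-and-return principle Akers describes is, in its simplest form, a time-of-flight calculation: distance is the speed of light multiplied by the round-trip time, halved. This is a rough sketch of the principle only; production scanners refine it considerably.

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_distance(round_trip_seconds):
    # The beam travels out and back, so halve the round-trip path.
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A return after roughly 66.7 nanoseconds puts the target about 10 metres away.
d = tof_distance(66.7e-9)
```

Repeating this measurement for every direction in a full 360-degree sweep, as the quote describes, is what builds up the millions of points in the final 3D map.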

Akers said the scanner is particularly useful at plane crashes because the law enforcement officers responding to the scene aren’t experts in that field. Using the FARO, they can get a true-to-life 3D rendering of everything, from cars on the side of the road to miniature pieces of debris, and send it off to qualified investigators.

The FARO is about the size of an Xbox console and is weather resistant. The data it collects is analyzed with a program called SCENE.

Utah Highway Patrol Sgt. Todd Royce said the machines have been used at major crashes and some crime scenes, and Akers estimated they’re in use about once a week.

Akers said they use the FARO even at small crime scenes because, for instance, sometimes just using a single laser point to measure where a gun sits in a crime scene leaves crucial evidence behind.

“What if there happened to be some element to the handgun that didn’t get captured in a picture or something else?” he said. “Whatever that was, it’s gone.”

Akers said he initially hoped the FARO would speed up crime scene analysis and, in the case of the plane crash, allow the road to reopen sooner. That hasn’t turned out to be the case, because of the multiple scans required and the time it takes to set up scene markers for easier analysis, but the scanners are still incredibly useful.

The FARO scanner has applications beyond accident reconstruction, including industrial inspections, reverse engineering and robot calibration, according to the FARO website.

Royce said the FARO renderings will be used by the highway patrol and the State Bureau of Investigation and handed off to the Federal Aviation Administration and National Transportation Safety Board if requested.

Source: standard.net

Nvidia Uses AI to Make 3D Graphics Better Than an Artist Can

Nvidia spans both gaming graphics and artificial intelligence, and it is showing that with its announcements this week at the Siggraph computer graphics event in Los Angeles.

Those announcements range from providing external graphics processing for content creators to testing AI robotics technology inside a virtual environment known as the Holodeck, named after the virtual reality simulator in the Star Trek series. In fact, Nvidia’s researchers have created a way for AI to create realistic human facial animations in a fraction of the time it takes human artists to do the same thing.

“We are bringing artificial intelligence to computer graphics,” said Greg Estes, vice president of developer marketing at Nvidia, in an interview with GamesBeat. “It’s bringing things full circle. If you look at our history in graphics, we took that into high-performance computing and took that into a dominant position in deep learning and AI. Now we are closing that loop and bringing AI into graphics.”

“Our strategy is to lead with research and break new ground,” he said. “Then we take that lead in research and take it into software development kits for developers.”

Above: Nvidia’s OptiX 5.0 can “de-noise” images by removing graininess.


Nvidia has 10 research papers at this year’s Siggraph event, Estes said, and some of that work will be relevant to Nvidia’s developers, who now number about 550,000. About half of those developers are in games, while the rest are in high-performance computing, robotics, and AI.

Among the announcements, one is particularly cool. Estes said that Nvidia will show off its Isaac robots in a new environment. These robots, which are being used to vet AI algorithms, will be brought inside the virtual environment that Nvidia calls Project Holodeck, a virtual space for collaboration where full simulations of things like cars and robots are possible. By putting the Isaac robots inside that world, they can learn how to behave without causing havoc in the real world.

Above: The Project Holodeck demo


“A robot will be able to learn things in VR,” Estes said. “We can train it in a simulated environment.”

Nvidia is providing external Titan X or Quadro graphics cards through an external graphics processing unit (eGPU) chassis. That will boost workflows for people who use their laptop computers for video editing, interactive rendering, VR content creation, AI development and more, Estes said.

To ensure professionals can enjoy great performance with applications such as Autodesk Maya and Adobe Premiere Pro, Nvidia is releasing a new performance driver for Titan X hardware to make it faster. The Quadro eGPU solutions will be available in September through partners such as Bizon, Sonnet, and One Stop Systems/Magma.

Nvidia also said it is launching its OptiX 5.0 SDK on the Nvidia DGX AI workstation. That will give designers, artists, and other content-creation professionals the rendering capability of 150 standard central processing unit (CPU) servers.

The tech could be used by millions of people, Estes said. And that kind of system would cost $75,000 over three years, compared to $4 million for a CPU-based system, the company said.

OptiX 5.0’s new ray tracing capabilities will speed up the process required to visualize designs or characters, thereby increasing a creative professional’s ability to interact with their content. It features new AI “de-noising” capability to accelerate the removal of graininess from images, and brings GPU-accelerated motion blur for realistic animation effects. It will be available for free in November.

By running Nvidia OptiX 5.0 on a DGX Station, content creators can significantly accelerate training, inference and rendering (meaning both AI and graphics tasks).

“AI is transforming industries everywhere,” said Steve May, vice president and chief technology officer of Pixar, in a statement. “We’re excited to see how Nvidia’s new AI technologies will improve the filmmaking process.”

On the research side, Nvidia is showing how it can animate realistic human faces and simulate how light interacts with surfaces. It will tap AI technology to improve the realism of the facial animations. Right now, it takes human artists hundreds of hours to create digital faces that more closely match the faces of human actors.

Nvidia Research partnered with Remedy Entertainment, maker of games such as Quantum Break, Max Payne and Alan Wake, to help game makers produce more realistic faces with less effort and at lower cost.

Above: Nvidia is using AI to create human facial animations.


The parties combined Remedy’s animation data and Nvidia’s deep learning technology to train a neural network to produce facial animations directly from actor videos. The research was done by Samuli Laine, Tero Karras, Timo Aila, and Jaakko Lehtinen. Nvidia’s solution requires only five minutes of training data to generate all the facial animation needed for an entire game from a simple video stream.

Antti Herva, lead character technical artist at Remedy, said that over time, the new methods will let the studio build larger, richer game worlds with more characters than are now possible.

Already, the studio is creating high-quality facial animation in much less time than in the past.

 

“Based on the Nvidia research work we’ve seen in AI-driven facial animation, we’re convinced AI will revolutionize content creation,” said Herva, in a statement. “Complex facial animation for digital doubles like that in Quantum Break can take several man-years to create. After working with Nvidia to build video- and audio-driven deep neural networks for facial animation, we can reduce that time by 80 percent in large scale projects and free our artists to focus on other tasks.”

In another research project, Nvidia trained a system to generate realistic facial animation using only audio. With this tool, game studios will be able to add more supporting game characters, create live animated avatars, and more easily produce games in multiple languages.

Above: AI can smooth out the “jaggies,” or rough edges in 3D graphics.


AI also holds promise for rendering 3D graphics, the process that turns digital worlds into the lifelike images you see on the screen. Filmmakers and designers use a technique called “ray tracing” to simulate light reflecting from surfaces in a virtual scene. Nvidia is using AI to improve both ray tracing and rasterization, a less costly rendering technique used in computer games.

In a related project, Nvidia researchers used AI to tackle a computer game rendering problem known as aliasing. Like de-noising, anti-aliasing removes artifacts from partially computed images; here the artifacts appear as stair-stepped “jaggies.” Nvidia researchers Marco Salvi and Anjul Patney trained a neural network to recognize jaggy artifacts and replace the affected pixels with smooth, anti-aliased ones. The AI-based solution produces images that are sharper (less blurry) than existing algorithms.
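For comparison with the learned approach, the classical way to soften jaggies is supersampling: render at a higher resolution, then average each block of subpixel samples down to one pixel. A minimal sketch of that baseline (not Nvidia's neural method):

```python
def supersample(image, factor):
    # Average each factor x factor block of subpixel samples into one pixel.
    h, w = len(image) // factor, len(image[0]) // factor
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            block = [image[y * factor + i][x * factor + j]
                     for i in range(factor) for j in range(factor)]
            out[y][x] = sum(block) / len(block)
    return out

# A hard diagonal edge rendered at 2x resolution (1.0 = covered, 0.0 = empty)...
hi_res = [
    [1.0, 1.0, 1.0, 0.0],
    [1.0, 1.0, 0.0, 0.0],
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
]
# ...downsamples to intermediate grey values along the edge instead of a hard step.
lo_res = supersample(hi_res, 2)
```

The cost of this baseline is rendering 4x (or more) of the pixels, which is exactly the expense a trained network that infers the smoothed pixels directly can avoid.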

Nvidia is also developing more efficient methods to trace virtual light rays. Computers sample the paths of many light rays to generate a photorealistic image. The problem is that not all of those light paths contribute to the final image.

Researchers Ken Daum and Alex Keller trained a neural network to guide the choice of light paths. They accomplished this by connecting the math of tracing light rays to the AI concept of reinforcement learning. Their solution taught the neural network to distinguish the paths most likely to connect lights with virtual cameras, from the paths that don’t contribute to the image.

Above: Nvidia uses AI to figure out light sources in 3D graphics.


Lastly, Nvidia said it is taking immersive VR to more people by releasing the VRWorks 360 Video SDK, which enables production houses to livestream high-quality, 360-degree stereo video to their audiences.

Normally, it takes a lot of computation time to stitch together images for 360-degree videos. By doing live 360-degree stereo stitching, Nvidia is making life a lot easier for the live-production and live-event industries, said Zvi Greenstein, vice president at Nvidia.

The VRWorks SDK lets production studios, camera makers and app developers integrate 360-degree stereo stitching into their existing workflows for live and post production. The Z Cam V1 Pro (made by VR camera firm Z Cam) is the first professional 360-degree VR camera that will fully integrate the VRWorks SDK.

“We have clients across a wide range of industries, from travel through sports, who want high quality, 360 degree video,” said Chris Grainger, CEO of Grainger VR, in a statement. “This allows filmmakers to push the boundaries of live storytelling.”

Source: Venturebeat.com

That Million Dollar Curb Appeal for the DIY-er


This article was originally posted at smoothair.ca. To read the full article, visit http://www.smoothair.ca/home/million-dollar-curb-appeal-diy-er/

April showers bring May flowers…or so they once said! Hopefully that English proverb holds true for us here in the Pacific Northwest this spring. While it has been dreary outside, long days and warmer weather await!

But what to do with your front yard once the weather turns?

Right now it is useless. It looks tired. It looks old. The neighbours think the house is abandoned (not actually but you get the point). This spring you are going to make your yard scream modernity – and on a DIY budget no less! Now roll up your sleeves and prepare to transform your tired yard into a magnificently modern masterpiece.

Lawn TLC

The key to a beautiful house is a beautiful lawn. Spring and autumn are the best seasons for lawn repair, and a bag of turf builder, combined with a regimen of moss killer and weed remover, is a quick and cheap way to restore that deep green hue.
Consider borrowing your neighbour’s push-mower to get that short uniform look and do not forget to edge the lawn. With proper edging and a close shave, you will have a fantastic looking lawn all summer – don’t forget to water!

Plant A Tree

A regularly pruned tree of moderate height, say…no larger than 7 feet, can be a focal point in your garden. It draws eyes from the house itself and provides depth to a well manicured lawn that is cut both short and uniform. You can grab a juvenile tree for around $20.

Be Bold: Stain Your Cement

The world of stained cement is a marvelous one. You are only limited by your imagination…and maybe your budget. But I know you are a DIY master with a small pocketbook. Fear not! The people over at doityourself have a wonderful breakdown of staining that cement, and for as little as 50 cents per square foot. Not bad at all!
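At the quoted rate of around 50 cents per square foot, budgeting the staining project is a one-line calculation. A quick sketch (the driveway dimensions below are made up for illustration):

```python
def staining_cost(area_sq_ft, rate_per_sq_ft=0.50):
    # Budget estimate: area times the per-square-foot material cost.
    return area_sq_ft * rate_per_sq_ft

# A hypothetical 20 ft x 15 ft pad of cement at 50 cents per square foot:
cost = staining_cost(20 * 15)  # 300 sq ft
```

Measure your actual slab before buying stain; rates also climb quickly if you choose multi-colour or acid-etched finishes.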

Accentuate with Native Shrubs

Small, upright shrubs placed in strategic areas in your front yard provide multiple benefits. Shrubbery helps to define features by framing or highlighting them and provides privacy when used as a hedge.

Using native vegetation is better because you know it can survive in your yard, and it requires less maintenance over time. It can also be cheaper, as you can transplant vegetation from a local forest, significantly reducing the cost of preparing your yard (just don’t snag any plants from your neighbourhood park).

Also, native vegetation will attract wildlife such as butterflies and birds, and it can be used to create water-wise gardens, meaning you can have a garden that looks like a million bucks and is environmentally friendly!

Modern Lighting for Any Budget

At this point your front yard is most likely the envy of the block. But you can really make this thing pop. Small, stainless steel bollards are a perfect accent to a well-manicured yard and can give that million-dollar curb appeal for under 100 dollars.

There is a HUGE selection of bollards to choose from, but we are going for an ultra modern aesthetic and short, stainless steel bollards are all the rage. If you buy solar, you don’t have to pay for electricity or bury any wire! Double Bonus.

I should warn you though. Making these simple, easy changes to your front yard may cause realtors to approach you with an offer of selling your house. That is the price a successful DIY-er must pay for their ingenuity.

Credit to Realspace 3D for this post.

The Best At Home Espresso Machine


This article was originally posted at MyEspressoShop.com.

So you’re finally ready to upgrade to a professional-grade espresso machine? Congratulations! We know it’s a big step, so we’re here to help you along the way.

Buying a high-end espresso machine is one of the best investments (if not the best investment) a coffee lover can make. Not only will it allow you to be able to enjoy real authentic espresso, cappuccinos, and ‘velvety’ lattes in your own home or office, but it will also allow you to save money in the process (win-win!).

We feel it’s important to emphasize this point again: with a good machine, you will be able to make your own drinks at a fraction of the cost you would pay at coffee shops. That said, not all espresso machines are created equal, so when you are buying yours, be sure to choose the right one.

Below, we will be going over some of the different options you can choose from that are currently on the market. (Also, please note that when we say ‘best’, this is simply our personal opinion. Everyone has their own preferences when it comes to features and qualities they prefer in an espresso machine, so we just wanted to share some of our favorites with you!).

Best Manual Espresso Machines

La Pavoni Europiccola Manual Espresso Machine. This particular espresso machine by La Pavoni is arguably the ‘top of the line’ in the entire market for manual espresso machines. Not only does it feature a classic design with its inclusion of the retro press (which essentially gives you complete control over every aspect of your shot), but it also has a beautiful chrome finish to it.

The machine is completely manual, so it will take some time to get used to. You will need to practice using it to get comfortable with the amount of pressure required, as well as to figure out the right pulling technique to get the best possible shot every time. Because of its manual nature, this machine is best suited to those who are willing to learn, have patience, and really want full control over the entire process.

This is not the machine to purchase if you are simply looking to find one that does all of the work with the click of a button.

Best Semi-automatic Espresso Machine

The Rocket Espresso R58 Espresso Machine is by far one of the best options if you are looking at semi-automatic espresso machines. Manufactured by Rocket Espresso, the R58 has a level of craftsmanship that absolutely shines. Because it features dual boilers for both steaming and brewing, it is going to provide you with the ultimate level of control.

Along with this, it comes with a plugin PID with a large display that can provide you with the details you need to be able to change the temperature of both boilers at once. You can either utilize the included water reservoir or simply pump a constant supply of fresh water into the machine. While Rocket’s R60V espresso machine is also fantastic, it’s quite a bit more expensive than the R58. That’s why we give Rocket’s R58 machine the edge.

Best Automatic Espresso Machine

La Spaziale S1 Dream T Espresso Machine. This was a harder category to choose, because there are many fantastic options, but most of us feel that the La Spaziale S1 Dream T is the number one automatic espresso machine on the entire market. It comes with a fully featured programmable touchpad and two separate boilers, which let you brew espresso and steam milk at the same time.