With SIGGRAPH 2010 just around the corner, we are bound to see many exciting advances in graphics technology. AMD has already announced a desktop driver that supports OpenGL ES 2.0, the embedded version of the 3D programming API OpenGL. You can download a beta here.
This is not an emulator or some sort of layer on top of OpenGL, but a full implementation of OpenGL ES accessed through EGL. Why is this important? How will this make your PC experience better? There are three reasons.
First, OpenGL ES support on all modern AMD desktop GPUs gives developers a unified environment. Game developers can write one 3D pipeline that runs on an HTC Evo, an Apple iPad, and on desktop and laptop PCs.
Second, this change will make web experiences faster and richer in the immediate future. WebGL is based on OpenGL ES. Web browsers will be able to use OpenGL ES directly on AMD hardware instead of having to translate every call to some other API first.
Third, mobile developers can create mobile content directly on a PC first, without having to wrangle with SDKs or emulators to make sure their code functions correctly on OpenGL ES. This makes developers' lives easier and speeds up the whole development process.
Look for OpenGL ES to make a big impact in bridging the gap with mobile devices and making 3D graphics more accessible.
Thanks again to the guys over at Google, WebGL continues to prove its validity. By using the HTML5 Canvas element and WebGL, the Web Toolkit guys have been able to bring the Quake II game and engine to any compatible web browser. No plugins necessary. OpenGL can be accessed directly by a web page. Great proof of concept! You can read more on the Web Toolkit blog here.
OpenGL has gone through many changes in the last two years. With the advent of OpenGL 3.3 and 4.0, many new hardware features are now accessible to 3D applications. Unfortunately, there are limited resources for developers to get up to speed on all of these changes.
That’s why we are hard at work on the next edition of the OpenGL SuperBible, the definitive guide to getting started with OpenGL. Stay tuned for more information.
Now for a closer look. Tessellation changes the shader pipeline landscape in some important ways. First, it adds two new shader stages, tessellation control and tessellation evaluation, immediately after the vertex shader. If shaders are bound to every stage, geometry data flows through the pipeline in the following order:
Vertex -> Tessellation Control -> Tessellation Evaluation -> Geometry -> Fragment -> (framebuffer)
The tessellation stages operate on a patch of geometry, which consists of a collection of vertices with per-vertex attributes as well as per-patch attributes. You can control the number of vertices in a patch directly through the API. Tessellation control shaders run once for each vertex in a patch and have access to the attributes of all other vertices in the patch. The shader can then output a new vertex modified based on patch information, and it can access uniforms and textures in much the same way a vertex shader can. This shader also controls the level of tessellation to be performed. Tessellation control shaders are optional.
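To make this concrete, here is a minimal sketch of a tessellation control shader for a three-vertex triangle patch in GLSL 4.00. This is an illustrative pass-through with fixed tessellation levels, not a complete application; the variable names `vPosition` and `tcPosition` are assumptions, not from any particular codebase:

```glsl
#version 400 core

// One invocation per output control point; three control points per patch.
layout (vertices = 3) out;

in vec3 vPosition[];   // from the vertex shader (assumed name)
out vec3 tcPosition[]; // to the evaluation shader (assumed name)

void main()
{
    // Pass this invocation's control point through unchanged.
    tcPosition[gl_InvocationID] = vPosition[gl_InvocationID];

    // Patch-wide outputs only need to be written by one invocation.
    if (gl_InvocationID == 0)
    {
        gl_TessLevelInner[0] = 4.0;
        gl_TessLevelOuter[0] = 4.0;
        gl_TessLevelOuter[1] = 4.0;
        gl_TessLevelOuter[2] = 4.0;
    }
}
```

A real application would typically compute the tessellation levels dynamically, for example from the patch's distance to the camera.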
The resulting patch, either output directly by the vertex shader or processed by the tessellation control shader if one is present, is then tessellated by the hardware primitive generator. Input geometry is tessellated according to the specified tessellation levels. For instance, an incoming triangle will generate concentric smaller triangles inside the original, and the space between each pair of concentric triangles is then divided into smaller triangles. Quads are handled similarly.
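On the API side, feeding the tessellator is mostly a matter of declaring the patch size and drawing with the patch primitive type. A hedged C sketch, with buffer setup and shader compilation omitted and `numVertices` standing in for your actual vertex count:

```c
/* Each patch consists of 3 control points, matching a
   "layout (vertices = 3) out" declaration in the control shader. */
glPatchParameteri(GL_PATCH_VERTICES, 3);

/* When tessellation shaders are active, geometry must be submitted
   as GL_PATCHES rather than GL_TRIANGLES. */
glDrawArrays(GL_PATCHES, 0, numVertices);
```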
Evaluating Tessellation Results
Evaluation shaders are required for tessellation to function. This shader runs once for each vertex the tessellator generates within a patch, and it can determine how many vertices are in the patch and what the primitive ID for the vertex is. Additionally, the evaluation shader can access the location of the new vertex within the patch as well as the tessellation levels. When complete, the evaluation shader outputs vertex attributes to be processed by the subsequent pipeline stages.
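A matching evaluation shader for triangle patches might look like the following sketch; `gl_TessCoord` holds the barycentric location of each generated vertex within the patch. The `tcPosition` input and `mvpMatrix` uniform are assumed names for illustration only:

```glsl
#version 400 core

// Consume triangle patches with equal spacing and counter-clockwise winding.
layout (triangles, equal_spacing, ccw) in;

in vec3 tcPosition[];   // control points from the control shader (assumed name)

uniform mat4 mvpMatrix; // assumed application-supplied transform

void main()
{
    // Interpolate the new vertex from the patch control points
    // using the barycentric coordinates in gl_TessCoord.
    vec3 p = gl_TessCoord.x * tcPosition[0] +
             gl_TessCoord.y * tcPosition[1] +
             gl_TessCoord.z * tcPosition[2];

    gl_Position = mvpMatrix * vec4(p, 1.0);
}
```

Displacement mapping is a common variation: sample a height texture at the interpolated location and offset `p` along the surface normal before transforming.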
Tessellation shaders are a great tool for enhancing geometry while incurring little additional overhead. They can significantly augment scene geometry while not affecting system bandwidth requirements or requiring significant increases in geometry storage. For more details, you can take a look at the OpenGL 4.0 specification or the ARB_tessellation_shader extension.
The newest version of OpenGL introduces many significant enhancements to 3D graphics. But one of the most interesting additions for OpenGL 4.0 is tessellation. Tessellation shaders can take incoming geometry and generate more detailed geometry on the GPU.
This feature gives applications dynamic control, at run time and from a single data set, over how much geometric detail to render. The end result is less vertex data, much faster throughput, and enhanced geometry detail.
Stay tuned for more details on tessellation coming soon.
This definitely calls into question Intel's ability to compete in the high-performance graphics market. It will also affect the next generation of gaming consoles. Only time will tell. But right now, ATI's latest generation of graphics cards remains the only DX11-capable hardware.
Even if Larrabee had made it to market, there is no telling how easy it would have been to fully utilize. After all, we just talked about how IBM canceled the Cell processor less than two weeks ago. These types of parallel architectures have proven challenging for many years. One of the best attributes of the OpenGL and OpenCL APIs is that developers don't have to figure out how best to utilize the massively parallel architecture; the hardware (and drivers) do that.
OpenGL started as an API that could only be supported on the most powerful of SGI workstations. How far we have come. The Khronos Group has been working on a new 3D standard for web browsers called WebGL. This standard is based on OpenGL ES 2.0 and allows web developers to create interactive 3D content for any browser supporting the standard. So far Mozilla, Safari, and possibly Chrome are on board. No word on stagnant Microsoft products.
This has the potential to really open up how we see and use the web. 3D games seem like the most obvious first step, but there's no reason to stop there. Google Earth already has the ability to render in OpenGL; soon there may be no need to download and install an application. Maps are an obvious use for 3D, and shopping and viewing products in 3D isn't a far stretch either. Kudos to the WebGL group for getting something substantial out so quickly.
Either way, the software world may not be ready for hybrid architectures like the Cell. Software has been written linearly for as long as software has existed, and the tools do not yet exist for breaking traditional code into efficiently parallelizable chunks. The architecture of Larrabee isn't drastically different. Time will tell whether developers will be able to make efficient use of this mixed architecture.
GPGPU technologies come at this problem from a different angle. OpenGL and OpenCL provide a single interface capable of addressing hundreds or thousands of cores. For developers, there is a clear distinction between running linearly and breaking data processing up in a massively parallel environment.
It’s exciting to see innovations in the compute space. It’s been many years since truly innovative designs and architectures have had a meaningful impact on how processing chips work.
The newspaper industry, much like the rest of traditional media, is in a state of turmoil. Revenue is down, and has been for a while. The New York Times may seem to be the worst hit, if only because of its name prestige. Jobs are being lost. Rupert Murdoch plans to save his expensive news media empire by charging for online content; the entire Murdoch empire is at risk. But it is hard to change the business model once the cat is out of the bag. Web users' switching costs are low: if readers cannot get free news at NYT.com, they will get it elsewhere.
Are newspapers and traditional media becoming more like the fine arts? Fine arts have long been supported by indirect revenue streams such as government grants, endowments, and private fundraising. It isn't unusual for a fine arts institute's earned revenue to cover only 30% of its operating costs. But if the news media cannot support their own costs, where will alternate funding come from? Endowments and private funding are not an option. I personally enjoy the fine arts, but one must ask, "If they cannot support their own costs, should they continue to operate?" What about traditional media?
OpenGL, Graphics, Technology, Web, Startups, and Education.
Where will the next big tech breakthrough come from? Which career choices will be the most beneficial? Things change so fast it is impossible to plan with certainty. This blog explores where we are, where we have come from, and what the future may hold.
Nick Haemel, a computer engineer, [...]