Point of View: Are you guilty of piracy by association?

Note: I particularly liked this PoV because it explains the wider repercussions of SOPA and the MegaUpload takedown. After all, it is sometimes about precedent, and about how good intentions can lead to bad repercussions.

 

They say birds of a feather flock together. Does that mean that if you happen to use the same cloud storage and file sharing service that’s also used by people who violate the law, you should be punished, too?  Some of the folks who had perfectly legal files stored on Megaupload.com must have felt as if they were being found guilty by association when their data was seized last week along with that of copyright violators.
The site was shut down by the U.S. government and its founder was arrested in New Zealand, with the FBI calling this "one of the largest criminal copyright cases ever brought by the United States." I still remember back when copyright violation was a civil matter, not a criminal offense. If I copyrighted my work and you used it without my permission, I had to take you to civil court and sue you. Then if you were found liable, the court ordered you to pay me monetary damages and/or to stop using my work. Today, though, the government has gone wild, criminalizing almost every “bad act.” Remember the old saying, “it’s not a federal crime”? Well, now it probably is.
This is a scary precedent, in more ways than one. If someone stores illegal material (child porn, for instance) on his or her SkyDrive account, are my documents and the photos of my dogs that I have stored with that service subject to government seizure?  Even worse, are Bill Gates (founder) and Steve Ballmer (CEO) of Microsoft going to be arrested for letting it happen? That may sound extreme, but the way things are going, it’s not unthinkable. There are already extant laws that hold a bartender criminally responsible if someone has too much to drink in his/her establishment and then gets behind the wheel of a car and kills someone. And I can guarantee I’ll get feedback from readers who think that is fair and right.
It all seems to be part of a broader legal trend that seeks to be "proactive" and outlaw not just the commission of wrongful acts, but also the use of anything that might possibly ever be used to commit wrongful acts. It’s like making it illegal to own a telephone because it could be used to place harassing or obscene phone calls, or making it illegal to own a gun because it might be used to commit a robbery – oh, wait; some jurisdictions do that, don’t they?
ComputerWorld says the moral of the Megaupload story is that we should be careful about what cloud services we use to store our stuff, and while that’s true, I think it misses the bigger picture. Something’s happening here and there are too many "Mr. Joneses" who don’t know what it is (let’s see how many of you are old enough to recognize that reference).
It’s easy to be cynical and say there’s nothing we can do about it. It’s the federal government, after all – they can do whatever they want. They have the superior firepower. But not everyone was quite so accepting of that idea. The “hacktivist” group Anonymous responded to the shutdown of Megaupload with a series of Distributed Denial of Service (DDoS) attacks against the web sites of the Department of Justice, the FBI, the U.S. Copyright Office, the RIAA and the MPAA.
Shortly before all this, an Internet-wide protest against two anti-piracy bills in Congress, SOPA and PIPA (reported in last week’s newsletter), resulted in the withdrawal of the legislation by its sponsors. Obviously it’s possible for online activists to exert influence in top political circles, even if the music and movie industry lobbies do have more money.
We may be witnessing the declaration of a new kind of war here. It’s going to be interesting to watch how it unfolds. Share your thoughts and opinions on this in our forum or email me.

From WinNews newsletter (Sorry, no direct link this time).

Did you lose access to data in MegaUpload?


Office solution: How to quickly add numbers in Word without a table

From TechRepublic

Last week, we were looking for a quick and easy way to add values in a Word document, without resorting to a table and table formulas. It would be nice if Word displayed the sum in the Status bar, similar to Excel. Well, it does!

Msphoto was the first to mention the Calculate command, which is the solution I had in mind. It isn’t readily available, so some users don’t know about it. You can use Calculate to sum a series of values when you don’t need a more complicated solution, such as a table or linking to Excel. Fortunately, it’s easy to add the command to the QAT (or Quick Access Toolbar):

  1. Choose More Commands from the QAT dropdown.
  2. In the resulting dialog, choose All Commands from the Choose Commands From dropdown.
  3. Select Calculate from the resulting list.
  4. Click Add and then click OK. Word will add the command to the QAT.

If you’re using Word 2003, do the following to add the Tools Calculate command:

  1. Choose Customize from the Tools menu.
  2. Click the Commands tab and choose All Commands from the Categories list.
  3. Choose ToolsCalculate from the Commands list.
  4. Drag ToolsCalculate to the toolbar.

To use the Calculate (Tools Calculate) command, you’ll need a series of numbers. Simply separate values with a comma, select the values, and then click Calculate to display their sum in the Status bar (which temporarily usurps the other indicators). You can also press [Ctrl]+V to paste the sum into your document.
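If you ever want to double-check the result outside Word, the same arithmetic is trivial to reproduce. Here is a minimal Python sketch (my own illustration, not part of the TechRepublic tip) that sums a comma-separated series the way Calculate reports it:

# Sum a comma-separated series of numbers, mirroring what Word's
# Calculate command shows in the Status bar and puts on the clipboard.
def calculate(series):
    return sum(float(v) for v in series.split(",") if v.strip())

print(calculate("12, 7.5, 30"))   # 49.5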

Building the next generation file system for Windows: ReFS

Today I read this blog article from MSDN.

It talks about the ReFS file system in Windows 8. It is worth reading at least the first part to understand what is coming in the new version of Windows. Most of these changes will be a big win for businesses where data needs to be more reliable and hot (always on), but they are still great at home for keeping our music, pictures and game saves.

The article is very technical, getting more so after the section “Key design attributes and features,” but at a minimum it is worth reading the first part (about 5 minutes) and the Q&A at the end.

 

We wanted to continue our dialog about data storage by talking about the next generation file system being introduced in Windows 8. Today, NTFS is the most widely used, advanced, and feature-rich file system in broad use. But when you’re reimagining Windows, as we are for Windows 8, we don’t rest on past successes, and so with Windows 8 we are also introducing a newly engineered file system. ReFS (which stands for Resilient File System) is built on the foundations of NTFS, so it maintains crucial compatibility while at the same time it has been architected and engineered for a new generation of storage technologies and scenarios. In Windows 8, ReFS will be introduced only as part of Windows Server 8, which is the same approach we have used for each and every file system introduction. Of course, at the application level, ReFS-stored data will be accessible from clients just as NTFS data would be. As you read this, let’s not forget that NTFS is by far the industry’s leading technology for file systems on PCs.

This detailed architectural post was authored by Surendra Verma, a development manager on our Storage and File System team, though, as with every feature, a lot of folks contributed. We have also used the FAQ approach again in this post.
–Steven

PS: Don’t forget to track us on @buildwindows8 where we were providing some updates from CES.


In this blog post I’d like to talk about a new file system for Windows. This file system, which we call ReFS, has been designed from the ground up to meet a broad set of customer requirements, both today’s and tomorrow’s, for all the different ways that Windows is deployed.

The key goals of ReFS are:

  • Maintain a high degree of compatibility with a subset of NTFS features that are widely adopted while deprecating others that provide limited value at the cost of system complexity and footprint.
  • Verify and auto-correct data. Data can get corrupted due to a number of reasons and therefore must be verified and, when possible, corrected automatically. Metadata must not be written in place to avoid the possibility of “torn writes,” which we will talk about in more detail below.
  • Optimize for extreme scale. Use scalable structures for everything. Don’t assume that disk-checking algorithms, in particular, can scale to the size of the entire file system.
  • Never take the file system offline. Assume that in the event of corruptions, it is advantageous to isolate the fault while allowing access to the rest of the volume. This is done while salvaging the maximum amount of data possible, all done live.
  • Provide a full end-to-end resiliency architecture when used in conjunction with the Storage Spaces feature, which was co-designed and built in conjunction with ReFS.

The key features of ReFS are as follows (note that some of these features are provided in conjunction with Storage Spaces).

  • Metadata integrity with checksums
  • Integrity streams providing optional user data integrity
  • Allocate on write transactional model for robust disk updates (also known as copy on write)
  • Large volume, file and directory sizes
  • Storage pooling and virtualization make file system creation and management easy
  • Data striping for performance (bandwidth can be managed) and redundancy for fault tolerance
  • Disk scrubbing for protection against latent disk errors
  • Resiliency to corruptions with "salvage" for maximum volume availability in all cases
  • Shared storage pools across machines for additional failure tolerance and load balancing

In addition, ReFS inherits the features and semantics from NTFS including BitLocker encryption, access-control lists for security, USN journal, change notifications, symbolic links, junction points, mount points, reparse points, volume snapshots, file IDs, and oplocks.

And of course, data stored on ReFS is accessible through the same file access APIs on clients that are used on any operating system that can access today’s NTFS volumes.

Key design attributes and features

Our design attributes are closely related to our goals. As we go through these attributes, keep in mind the history of producing file systems used by hundreds of millions of devices scaling from the smallest footprint machines to the largest data centers, from the smallest storage format to the largest multi-spindle format, from solid state storage to the largest drives and storage systems available. Yet at the same time, Windows file systems are accessed by the widest array of application and system software anywhere. ReFS takes that learning and builds on it. We didn’t start from scratch, but reimagined it where it made sense and built on the right parts of NTFS where that made sense. Above all, we are delivering this in a pragmatic manner consistent with the delivery of a major file system—something only Microsoft has done at this scale.

Code reuse and compatibility

When we look at the file system API, this is the area where compatibility is the most critical and technically, the most challenging. Rewriting the code that implements file system semantics would not lead to the right level of compatibility and the issues introduced would be highly dependent on application code, call timing, and hardware. Therefore in building ReFS, we reused the code responsible for implementing the Windows file system semantics. This code implements the file system interface (read, write, open, close, change notification, etc.), maintains in-memory file and volume state, enforces security, and maintains memory caching and synchronization for file data. This reuse ensures a high degree of compatibility with the features of NTFS that we’re carrying forward.

Underneath this reused portion, the NTFS version of the code base uses its existing engine, which implements on-disk structures such as the Master File Table (MFT) to represent files and directories. ReFS combines the reused upper-layer code with a brand-new on-disk engine, and a significant portion of the innovation behind ReFS lies in this new engine. Graphically, it looks like this:

[Diagram: NTFS.SYS = the NTFS upper-layer API/semantics engine on top of the NTFS on-disk store engine; ReFS.SYS = the same upper-layer engine inherited from NTFS on top of a new on-disk store engine.]

Reliable and scalable on-disk structures

On-disk structures and their manipulation are handled by the on-disk storage engine. This exposes a generic key-value interface, which the layer above leverages to implement files, directories, etc. For its own implementation, the storage engine uses B+ trees exclusively. In fact, we utilize B+ trees as the single common on-disk structure to represent all information on the disk. Trees can be embedded within other trees (a child tree’s root is stored within the row of a parent tree). On the disk, trees can be very large and multi-level or really compact with just a few keys and embedded in another structure. This ensures extreme scalability up and down for all aspects of the file system. Having a single structure significantly simplifies the system and reduces code. The new engine interface includes the notion of “tables” that are enumerable sets of key-value pairs. Most tables have a unique ID (called the object ID) by which they can be referenced. A special object table indexes all such tables in the system.
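As a rough mental model only (my sketch, not ReFS code), the engine’s generic key-value interface can be pictured like this: every table is an enumerable set of key-value pairs, a value may itself be an embedded child table, and the special object table maps object IDs to the globally referenced tables.

# Sketch of the key-value "table" abstraction; a Python dict stands in
# for the B+ tree the real storage engine uses on disk.
class Table:
    def __init__(self, object_id=None):
        self.object_id = object_id   # set only for globally referenced tables
        self.rows = {}               # key -> value; a value may be another Table

    def insert(self, key, value):
        self.rows[key] = value

    def lookup(self, key):
        return self.rows.get(key)

    def enumerate(self):
        return self.rows.items()

# The special object table indexes every globally referenced table by its ID.
object_table = Table()
root_directory = Table(object_id=2)
object_table.insert(2, root_directory)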

Now, let’s look at how the common file system abstractions are constructed using tables.

[Diagram: the object table maps object IDs to disk offsets and checksums. An entry points to a directory table keyed by file name; each directory row embeds a file metadata table of key/value pairs, which in turn embeds a file extents table mapping ranges such as 0-07894, 7895-10000, 10001-57742, and 57743-9002722 to disk offsets and checksums.]

File structures

As shown in the diagram above, directories are represented as tables. Because we implement tables using B+ trees, directories can scale efficiently, becoming very large. Files are implemented as tables embedded within a row of the parent directory, itself a table (represented as File Metadata in the diagram above). The rows within the File Metadata table represent the various file attributes. The file data extent locations are represented by an embedded stream table, which is a table of offset mappings (and, optionally, checksums). This means that the files and directories can be very large without a performance impact, eclipsing the limitations found in NTFS.
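Continuing that sketch (again my illustration, not the actual on-disk layout): a directory row embeds a file metadata table, and the file metadata embeds a stream table that maps extent ranges to disk offsets and checksums, which is everything a read needs to locate data.

# Sketch only: nested dicts stand in for tables embedded within tables.
directory = {
    "notes.txt": {                          # file = metadata table embedded in the directory row
        "std_info": {"size": 9_002_723},
        "extents": {                        # stream table: file offset range -> disk location
            (0, 7894):     {"disk_offset": 1_048_576, "checksum": 0xA1B2C3},
            (7895, 10000): {"disk_offset": 8_388_608, "checksum": 0x00D4E5},
        },
    },
}

def locate(directory, name, file_offset):
    """Return the disk address holding file_offset of the named file."""
    for (start, end), extent in directory[name]["extents"].items():
        if start <= file_offset <= end:
            return extent["disk_offset"] + (file_offset - start)
    raise ValueError("offset not allocated")

print(locate(directory, "notes.txt", 8000))   # falls in the second extent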

As expected, other global structures within the file system such as ACLs (Access Control Lists) are represented as tables rooted within the object table.

All disk space allocation is managed by a hierarchical allocator, which represents free space by tables of free space ranges. For scalability, there are three such tables – the large, medium and small allocators. These differ in the granularity of space they manage: for example, a medium allocator manages medium-sized chunks allocated from the large allocator. This makes disk allocation algorithms scale very well, and allows us the benefit of naturally collocating related metadata for better performance. The roots of these allocators as well as that of the object table are reachable from a well-known location on the disk. Some tables have allocators that are private to them, reducing contention and encouraging better allocation locality.
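A toy version of that hierarchy (my simplification, with made-up chunk sizes) behaves like this: each allocator keeps its own table of free ranges and, when it runs dry, carves a chunk out of the level above it.

# Sketch only: free space is tracked as (offset, length) ranges per level.
class Allocator:
    def __init__(self, name, parent=None, refill_size=0):
        self.name = name
        self.parent = parent          # the medium allocator refills from the large one, etc.
        self.refill_size = refill_size
        self.free_ranges = []         # list of (offset, length)

    def allocate(self, length):
        for i, (offset, free_len) in enumerate(self.free_ranges):
            if free_len >= length:
                self.free_ranges[i] = (offset + length, free_len - length)
                return offset
        if self.parent is None:
            raise MemoryError("volume full")
        chunk = self.parent.allocate(self.refill_size)      # carve a chunk from the level above
        self.free_ranges.append((chunk + length, self.refill_size - length))
        return chunk

large = Allocator("large")
large.free_ranges = [(0, 1 << 40)]                                # pretend 1 TB of free space
medium = Allocator("medium", parent=large, refill_size=1 << 26)   # 64 MB chunks
small = Allocator("small", parent=medium, refill_size=1 << 16)    # 64 KB chunks
cluster = small.allocate(4096)                                    # a small allocation request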

Apart from global system metadata tables, the entries in the object table refer to directories, since files are embedded within directories.

Robust disk update strategy

Updating the disk reliably and efficiently is one of the most important and challenging aspects of a file system design. We spent a lot of time evaluating various approaches. One of the approaches we considered and rejected was to implement a log structured file system. This approach is unsuitable for the type of general-purpose file system required by Windows. NTFS relies on a journal of transactions to ensure consistency on the disk. That approach updates metadata in-place on the disk and uses a journal on the side to keep track of changes that can be rolled back on errors and during recovery from a power loss. One of the benefits of this approach is that it maintains the metadata layout in place, which can be advantageous for read performance. The main disadvantages of a journaling system are that writes can get randomized and, more importantly, the act of updating the disk can corrupt previously written metadata if power is lost at the time of the write, a problem commonly known as torn write.

To maximize reliability and eliminate torn writes, we chose an allocate-on-write approach that never updates metadata in-place, but rather writes it to a different location in an atomic fashion. In some ways this borrows from a very old notion of “shadow paging” that is used to reliably update structures on the disk. Transactions are built on top of this allocate-on-write approach. Since the upper layer of ReFS is derived from NTFS, the new transaction model seamlessly leverages failure recovery logic already present, which has been tested and stabilized over many releases.
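Here is a deliberately tiny sketch of allocate-on-write (shadow paging), in my own terms rather than ReFS internals: the updated page is written to freshly allocated space first, and only then is a single pointer flipped to the new version, so a power loss leaves either the complete old state or the complete new state, never a torn mix.

# Sketch only: a dict of page_id -> bytes stands in for the disk, plus one root pointer.
disk = {}
root = {"current": None}       # the single pointer that is updated atomically at commit time
next_free = [0]

def write_new_page(data):
    """Allocate-on-write: never overwrite an existing page in place."""
    page_id = next_free[0]
    next_free[0] += 1
    disk[page_id] = data       # step 1: write the new version somewhere else
    return page_id

def commit(page_id):
    root["current"] = page_id  # step 2: one atomic pointer flip publishes the new version

new_page = write_new_page(b"updated directory page")
commit(new_page)               # if power fails before this line, the old version is still intact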

ReFS allocates metadata in a way that allows writes to be combined for related parts (for example, stream allocation, file attributes, file names, and directory pages) in fewer, larger I/Os, which is great for both spinning media and flash. At the same time a measure of read contiguity is maintained. The hierarchical allocation scheme is leveraged heavily here.

We perform significant testing where power is withdrawn from the system while the system is under extreme stress, and once the system is back up, all structures are examined for correctness. This testing is the ultimate measure of our success. We have achieved an unprecedented level of robustness in this test for Microsoft file systems. We believe this is industry-leading and fulfills our key design goals.

Resiliency to disk corruptions

As mentioned previously, one of our design goals was to detect and correct corruption. This not only ensures data integrity, but also improves system availability and online operation. Thus, all ReFS metadata is check-summed at the level of a B+ tree page, and the checksum is stored independently from the page itself. This allows us to detect all forms of disk corruption, including lost and misdirected writes and bit rot (degradation of data on the media). In addition, we have added an option where the contents of a file are check-summed as well. When this option, known as “integrity streams,” is enabled, ReFS always writes the file changes to a location different from the original one. This allocate-on-write technique ensures that pre-existing data is not lost due to the new write. The checksum update is done atomically with the data write, so that if power is lost during the write, we always have a consistently verifiable version of the file available whereby corruptions can be detected authoritatively.
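The detection half of that can be sketched as follows (an illustration of the general approach, using CRC32 in place of ReFS’s 64-bit checksums): because the checksum lives in the parent’s reference to a page rather than in the page itself, a lost or misdirected write shows up as a mismatch the next time the page is read.

import zlib

# Sketch only: the parent row stores (disk_offset, checksum) for a child B+ tree page.
def write_page(disk, offset, data):
    disk[offset] = data
    return {"disk_offset": offset, "checksum": zlib.crc32(data)}   # kept with the parent reference

def read_page(disk, ref):
    data = disk.get(ref["disk_offset"], b"")
    if zlib.crc32(data) != ref["checksum"]:
        raise IOError("checksum mismatch: lost write, misdirected write, or bit rot detected")
    return data

disk = {}
ref = write_page(disk, 4096, b"B+ tree page contents")
assert read_page(disk, ref) == b"B+ tree page contents"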

We blogged about Storage Spaces a couple of weeks ago. We designed ReFS and Storage Spaces to complement each other, as two components of a complete storage system. We are making Storage Spaces available for NTFS (and client PCs) because there is great utility in that; the architectural layering supports this client-side approach while we adapt ReFS for usage on clients so that ultimately you’ll be able to use ReFS across both clients and servers.

In addition to improved performance, Storage Spaces protects data from partial and complete disk failures by maintaining copies on multiple disks. On read failures, Storage Spaces is able to read alternate copies, and on write failures (as well as complete media loss on read/write) it is able to reallocate data transparently. Many failures don’t involve media failure, but happen due to data corruptions, or lost and misdirected writes.

These are exactly the failures that ReFS can detect using checksums. Once ReFS detects such a failure, it interfaces with Storage Spaces to read all available copies of data and chooses the correct one based on checksum validation. It then tells Storage Spaces to fix the bad copies based on the good copies. All of this happens transparently from the point of view of the application. If ReFS is not running on top of a mirrored Storage Space, then it has no means to automatically repair the corruption. In that case it will simply log an event indicating that corruption was detected and fail the read if it is for file data. I’ll talk more about the impact of this on metadata later.
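On a mirrored space, the repair path is conceptually this simple (my standalone sketch, again with CRC32 standing in for the real checksums): read every copy, keep the first one whose checksum validates, and rewrite the copies that fail.

import zlib

def read_with_repair(mirrors, offset, expected_checksum):
    """mirrors: one dict per mirrored copy of the disk, mapping offset -> bytes."""
    good = None
    for copy in mirrors:                         # ask the mirror set for every copy
        data = copy.get(offset, b"")
        if zlib.crc32(data) == expected_checksum:
            good = data
            break
    if good is None:
        raise IOError("no valid copy left: log the corruption and fail the read")
    for copy in mirrors:                         # heal any copy that fails validation
        if zlib.crc32(copy.get(offset, b"")) != expected_checksum:
            copy[offset] = good
    return good

mirror_a = {4096: b"payload"}
mirror_b = {4096: b"p@yl0ad"}                    # silently corrupted copy
read_with_repair([mirror_a, mirror_b], 4096, zlib.crc32(b"payload"))
assert mirror_b[4096] == b"payload"              # the bad copy was fixed from the good one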

Checksums (64-bit) are always turned on for ReFS metadata, and assuming that the volume is hosted on a mirrored Storage Space, automatic correction is also always turned on. All integrity streams (see below) are protected in the same way. This creates an end-to-end high integrity solution for the customer, where relatively unreliable storage can be made highly reliable.

Integrity streams

Integrity streams protect file content against all forms of data corruption. Although this feature is valuable for many scenarios, it is not appropriate for some. For example, some applications prefer to manage their file storage carefully and rely on a particular file layout on the disk. Since integrity streams reallocate blocks every time file content is changed, the file layout is too unpredictable for these applications. Database systems are excellent examples of this. Such applications also typically maintain their own checksums of file content and are able to verify and correct data by direct interaction with Storage Spaces APIs.

For those cases where a particular file layout is required, we provide mechanisms and APIs to control this setting at various levels of granularity.

At the most basic level, integrity is an attribute of a file (FILE_ATTRIBUTE_INTEGRITY_STREAM). It is also an attribute of a directory. When present in a directory, it is inherited by all files and directories created inside the directory. For convenience, you can use the “format” command to specify this for the root directory of a volume at format time. Setting it on the root ensures that it propagates by default to every file and directory on the volume. For example:

D:\>format /fs:refs /q /i:enable <volume>

D:\>format /fs:refs /q /i:disable <volume>

By default, when the /i switch is not specified, the behavior that the system chooses depends on whether the volume resides on a mirrored space. On a mirrored space, integrity is enabled because we expect the benefits to significantly outweigh the costs. Applications can always override this programmatically for individual files.

Battling “bit rot”

As we described earlier, the combination of ReFS and Storage Spaces provides a high degree of data resiliency in the presence of disk corruptions and storage failures. A form of data loss that is harder to detect and deal with happens due to “bit rot,” where parts of the disk develop corruptions over time that go largely undetected since those parts are not read frequently. By the time they are read and detected, the alternate copies may have also been corrupted or lost due to other failures.

In order to deal with bit rot, we have added a system task that periodically scrubs all metadata and Integrity Stream data on a ReFS volume residing on a mirrored Storage Space. Scrubbing involves reading all the redundant copies and validating their correctness using the ReFS checksums. If checksums mismatch, bad copies are fixed using good ones.

The file attribute FILE_ATTRIBUTE_NO_SCRUB_DATA indicates that the scrubber should skip the file. This attribute is useful for those applications that maintain their own integrity information, when the application developer wants tighter control over when and how those files are scrubbed.

The Integrity.exe command line tool is a powerful way to manage the integrity and scrubbing policies.

When all else fails…continued volume availability

We expect many customers to use ReFS in conjunction with mirrored Storage Spaces, in which case corruptions will be automatically and transparently fixed. But there are cases, admittedly rare, when even a volume on a mirrored space can get corrupted – for example faulty system memory can corrupt data, which can then find its way to the disk and corrupt all redundant copies. In addition, some customers may not choose to use a mirrored storage space underneath ReFS.

For these cases where the volume gets corrupted, ReFS implements “salvage,” a feature that removes the corrupt data from the namespace on a live volume. The intention behind this feature is to ensure that non-repairable corruption does not adversely affect the availability of good data. If, for example, a single file in a directory were to become corrupt and could not be automatically repaired, ReFS will remove that file from the file system namespace while salvaging the rest of the volume. This operation can typically be completed in under a second.

Normally, the file system cannot open or delete a corrupt file, making it impossible for an administrator to respond. But because ReFS can still salvage the corrupt data, the administrator is able to recover that file from a backup or have the application re-create it without taking the file system offline. This key innovation ensures that we do not need to run an expensive offline disk checking and correcting tool, and allows for very large data volumes to be deployed without risking large offline periods due to corruption.

A clean fit into the Windows storage stack

We knew we had to design for maximum flexibility and compatibility. We designed ReFS to plug into the storage stack just like another file system, to maximize compatibility with the other layers around it. For example, it can seamlessly leverage BitLocker encryption, Access Control Lists for security, USN journal, change notifications, symbolic links, junction points, mount points, reparse points, volume snapshots, file IDs, and oplocks. We expect most file system filters to work seamlessly with ReFS with little or no modification. Our testing bore this out; for example, we were able to validate the functionality of the existing Forefront antivirus solution.

Some filters that depend on the NTFS physical format will need greater modification. We run an extensive compatibility program where we test our file systems with third-party antivirus, backup, and other such software. We are doing the same with ReFS and will work with our key partners to address any incompatibilities that we discover. This is something we have done before and is not unique to ReFS.

An aspect of flexibility worth noting is that although ReFS and Storage Spaces work well together, they are designed to run independently of each other. This provides maximum deployment flexibility for both components without unnecessarily limiting each other. Or said another way, there are reliability and performance tradeoffs that can be made in choosing a complete storage solution, including deploying ReFS with underlying storage from our partners.

With Storage Spaces, a storage pool can be shared by multiple machines and the virtual disks can seamlessly transition between them, providing additional resiliency to failures. Because of the way we have architected the system, ReFS can seamlessly take advantage of this.

Usage

We have tested ReFS using a sophisticated and vast set of tens of thousands of tests that have been developed over two decades for NTFS. These tests simulate and exceed the requirements of the deployments we expect in terms of stress on the system, failures such as power loss, scalability, and performance. Therefore, ReFS is ready to be deployment-tested in a managed environment. Since this is the first version of a major file system, we do suggest just a bit of caution. We do not characterize ReFS in Windows 8 as a “beta” feature. It will be a production-ready release when Windows 8 comes out of beta, with the caveat that nothing is more important than the reliability of data. So, unlike any other aspect of a system, this is one where a conservative approach to initial deployment and testing is mandatory.

With this in mind, we will implement ReFS in a staged evolution of the feature: first as a storage system for Windows Server, then as storage for clients, and then ultimately as a boot volume. This is the same approach we have used with new file systems in the past.

Initially, our primary test focus will be running ReFS as a file server. We expect customers to benefit from using it as a file server, especially on a mirrored Storage Space. We also plan to work with our storage partners to integrate it with their storage solutions.

Conclusion

Along with Storage Spaces, ReFS forms the foundation of storage on Windows for the next decade or more. We believe this significantly advances our state of the art for storage. Together, Storage Spaces and ReFS have been architected with headroom to innovate further, and we expect that we will see ReFS as the next massively deployed file system.

— Surendra

FAQ:

Q) Why is it named ReFS?

ReFS stands for Resilient File System. Although it is designed to be better in many dimensions, resiliency stands out as one of its most prominent features.

Q) What are the capacity limits of ReFS?

The table below shows the capacity limits of the on-disk format. Other concerns may determine some practical limits, such as the system configuration (for example, the amount of memory), limits set by various system components, as well as time taken to populate data sets, backup times, etc.

  • Maximum size of a single file: 2^64-1 bytes
  • Maximum size of a single volume: the format supports 2^78 bytes with a 16KB cluster size (2^64 * 16 * 2^10); Windows stack addressing allows 2^64 bytes
  • Maximum number of files in a directory: 2^64
  • Maximum number of directories in a volume: 2^64
  • Maximum file name length: 32K Unicode characters
  • Maximum path length: 32K
  • Maximum size of any storage pool: 4 PB
  • Maximum number of storage pools in a system: no limit
  • Maximum number of spaces in a storage pool: no limit



Q) Can I convert data between NTFS and ReFS?

In Windows 8 there is no way to convert data in place. Data can be copied. This was an intentional design decision given the size of data sets that we see today and how impractical it would be to do this conversion in place, in addition to the likely change in architected approach before and after conversion.

Q) Can I boot from ReFS in Windows Server 8?

No, this is not implemented or supported.

Q) Can ReFS be used on removable media or drives?

No, this is not implemented or supported.

Q) What semantics or features of NTFS are no longer supported on ReFS?

The NTFS features we have chosen to not support in ReFS are: named streams, object IDs, short names, compression, file level encryption (EFS), user data transactions, sparse, hard-links, extended attributes, and quotas.

Q) What about parity spaces and ReFS?

ReFS is supported on the fault resiliency options provided by Storage Spaces. In Windows Server 8, automatic data correction is implemented for mirrored spaces only.

Q) Is clustering supported?

Failover clustering is supported, whereby individual volumes can failover across machines. In addition, shared storage pools in a cluster are supported.

Q) What about RAID? How do I use ReFS capabilities of striping, mirroring, or other forms of RAID? Does ReFS deliver the read performance needed for video, for example?

ReFS leverages the data redundancy capabilities of Storage Spaces, which include striped mirrors and parity. The read performance of ReFS is expected to be similar to that of NTFS, with which it shares a lot of the relevant code. It will be great at streaming data.

Q) How come ReFS does not have deduplication, second level caching between DRAM & storage, and writable snapshots?

ReFS does not itself offer deduplication. One side effect of its familiar, pluggable, file system architecture is that other deduplication products will be able to plug into ReFS the same way they do with NTFS.

ReFS does not explicitly implement a second-level cache, but customers can use third-party solutions for this.

ReFS and VSS work together to provide snapshots in a manner consistent with NTFS in Windows environments. For now, they don’t support writable snapshots or snapshots larger than 64TB.

Ten Tips For Protecting Your Devices From Seizure By U.S. Customs

With U.S. Customs agents increasingly interested in the contents of digital devices like iPhones, iPads and laptops, The Electronic Frontier Foundation has issued guidance for getting your mobile device across the border safely and protecting the data on it should it get seized.

The Fourth Amendment to the U.S. Constitution protects American citizens from unreasonable search and seizure – a fundamental Constitutional right that courts have interpreted as encompassing not just our bodies, but our stuff: homes, cars and these days, our electronic devices. But the 4th Amendment doesn’t extend to U.S. border crossings, where courts agree that the government has the legal authority to seize and search your car and devices, even when there’s no suspicion of wrongdoing. The Electronic Frontier Foundation has put together a guide (.PDF) for would-be border crossers to protect their devices from seizure and protect the data they contain in the event that U.S. Customs decides to take a closer look. Here’s a look at some of their tips from “Defending Privacy at the U.S. Border.”

Continue at Source

Note: The linked PDF has more specifics. Threat Post basically simplified the guidelines into a top 10 list.

https://www.eff.org/sites/default/files/EFF-border-search_2.pdf

It is also good to note that you should be careful regardless of what data you carry. Imagine traveling with a PowerPoint deck that you need for an important client meeting, or a wedding video that you wanted to show family, only to have to leave your digital gadgets at the border. Having that information backed up in a second place will save your day (and tears).

Slammed And Blasted A Decade Ago, Microsoft Got Serious About Security

This article is a little longer than usual; however, it does a great job of showing how security in Windows systems has improved, and why it is important. Sometimes, things we take for granted had a beginning, right?

From Threat Post

A decade ago this week, Chairman Bill Gates kicked off the Trustworthy Computing Initiative at Microsoft with a company-wide memo. The echoes of that memo still resonate throughout the software industry today as other firms, from Apple to Adobe, and Oracle to Google have followed the path that Microsoft blazed over the past ten years.

But the Trustworthy Computing Initiative, which made terms like secure development lifecycle (SDL), automated patching, and “responsible disclosure” part of the IT community’s common parlance, was no stroke of genius from the visionary Gates. Nor did the plan spring, like Athena, fully formed from the CEO’s forehead. In fact, Trustworthy Computing owes its existence as much to four pieces of virulent malware as it does to Bill Gates’ vision and market savvy. This is the story of how worms drove one of the biggest transformations in the history of the technology industry.

“Not just a marketing problem”

In 2001, there was no Microsoft Security Response Center. The Windows Update service did not exist. Security bulletins were rudimentary, at best, and Windows XP had no default firewall.

For much of the previous two years, the most prevalent online threats had come in the form of mass-mailing computer viruses that used macros to cull contact information from infected computers. Each infection yielded a bunch of new contacts and the next batch of potential victims. The prominent threats of this generation, mass-mailing viruses like Melissa and LoveLetter, spawned some security changes from Microsoft. But the changes were iterative – Band-Aids on an obvious problem – not efforts at better or more secure product design.

The abrupt arrival of the Code Red worm in July of that year turned conventional thinking about the dangers of Internet-borne threats – and how to handle them – on its head. The worm, like many that would come after it, used a software vulnerability in a common Microsoft platform and a slow response to the disclosure of that vulnerability to devastating effect.

In June 2001, Microsoft released an advisory and patch for its Internet Information Server, warning of a security vulnerability in how it handled certain requests. Security firm eEye Digital Security had found the vulnerability and warned Microsoft of the issue. Microsoft quickly addressed the problem, but with little impact: customers had neither the tools nor the incentive to patch the flaw, recalls Marc Maiffret, chief technology officer of eEye.

"Microsoft was responsive, but they were trying to figure out how to handle security and to not just keep thinking of these issues as marketing problems," Maiffret says.

Less than a month later, Code Red arrived, exploiting that same vulnerability to spread from Web server to Web server. Maiffret and his team analyzed the code and named the worm after the variant of Mountain Dew they had constantly quaffed during the analysis. Nearly a half million servers were infected by the attack, according to estimates at the time. He recalls being surprised by the damage and disruption Code Red caused, both to customers and to the software industry, itself.

"We understood the threat technically, but did not understand the impact it would have on the industry and the security landscape," says Maiffret.

If Microsoft was not convinced that its products needed a security revamp, the Nimda virus, which started spreading a couple of months later, in September 2001, nailed the message home. Nimda was dubbed a “blended” threat because it used multiple techniques to spread, including e-mail, open network shares on infected networks, Web pages and direct attacks on vulnerable IIS installations. Nimda didn’t propagate as quickly as Code Red, but it was difficult to eradicate from affected networks. That meant more and longer support calls for Microsoft and more expensive remediation.

By the end of 2001, Microsoft was feeling the pressure from irate customers and from an increasingly attentive media, which lambasted the company for prioritizing features over underlying security. The company and its leader realized that they needed to start anew. Gates’ Trustworthy Computing Initiative e-mail would appear just two weeks into the new year, 2002.

“We stopped writing code.”

On Thursday, January 23, 2003, Tim Rains moved from Microsoft’s network support team and began his first day as part of the company’s incident response group. The engineer did not have much time to acclimate to his new position: Within 48 hours, the Slammer worm hit, compromising hundreds of thousands of servers and inundating Rains’ group with support calls.

The virulent worm spread between systems running Microsoft’s SQL Server as well as applications that used embedded versions of the software, exploiting a flaw that had been patched six months earlier. The threat moved fast, earning the title of the world’s first flash worm: The program — 376 bytes of computer code — spread to 90 percent of all vulnerable servers in the first 10 minutes, according to a report by security researchers and academic computer scientists.

By Saturday, Rains and the security team were buried under an avalanche of support calls. Microsoft halted its regular work and conscripted much of the company’s programming staff to help respond to the threat.

"It really stands out how Microsoft mobilized," Rains says. "We stopped writing code, and programmers came over to call centers that we had. I remember being in large rooms and training people to help customers."

For Microsoft in 2003, Slammer was a reminder that the company still had a long way to go if it wanted to see its nascent Trustworthy Computing effort bear fruit. In the year since Gates’ memo was sent, the software maker had pushed through major changes to its software development process.

Following the Code Red and Nimda worms, Microsoft had changed course: it focused on securing its products and making them easier for customers to secure, and it created the Strategic Technology Protection Program in October 2001.

But helping users secure the company’s difficult-to-secure products was not enough. Microsoft also had to change an internal development culture that prioritized features over security.

Announcing the Trustworthy Computing Initiative in January, Gates said: "When we face a choice between adding features and resolving security issues, we need to choose security. Our products should emphasize security right out of the box."

In the following 12 months, the company halted much of its product development, trained nearly 8,500 developers in secure programming, and then put them to work reviewing Windows code for security errors. The total tally of the effort: About $100 million, according to Microsoft’s estimates.

But Microsoft still needed more time and effort to improve its software. The very same Thursday that Rains began in the security incident response center, Gates sent out a company-wide e-mail celebrating Trustworthy Computing’s first birthday, and highlighting how far the company’s engineers had to go to secure its products.

"While we’ve accomplished a lot in the past year, there is still more to do–at Microsoft and across our industry," Gates wrote.

The Slammer worm attack just days later was a timely reminder to Microsoft of its failings and a convincing argument for why it had to continue on its costly crusade, in particular in cajoling its massive customer base to apply the security fixes that it issued.

SQL Slammer was based on proof-of-concept code privately disclosed to the company by UK security researcher David Litchfield months before, and quickly patched by the company. A demo of an exploit for the hole at the Black Hat Briefings in the summer of 2002 also raised the profile of the SQL vulnerability, but to no avail: few SQL Server users had applied the company’s patch by the time January rolled around (Litchfield estimated fewer than 1 in 10 had been patched prior to the release of Slammer). Once the SQL Slammer worm began jumping from SQL Server installation to SQL Server installation, circling the globe in just minutes, there was little time to patch.

Slammer, like its predecessors, forced still more radical changes in Microsoft’s corporate culture and procedures. Development of Yukon (SQL Server 2005) was put on hold and the company’s entire SQL team went back over codebases from Yukon back to SQL Server 2000 to look for flaws. As Litchfield wrote in a Threatpost editorial, the effort, though costly, paid dividends:

“The first major flaw to be found in SQL Server 2005 came over 3 years after its release… So far SQL Server 2008 has had zero issues. Not bad at all for a company long considered the whipping boy of the security world.”

Slammer also prompted big changes in the area of patch and update distribution. Microsoft simplified its update infrastructure and made efforts to improve patches, and embarked on a number of information sharing efforts with the security community.

“A turning point”

The MSBlast or Blaster worm, which started spreading in August 2003, perhaps had the greatest impact on Microsoft’s secure development efforts, however.

The worm took advantage of a vulnerability in Windows XP’s remote procedure call (RPC) functionality, which security professionals at the time called the most widespread flaw ever. In its first few months, the worm infected about 10 million PCs, according to Microsoft data. Eighteen months later, the software giant had updated the figure to more than 25 million.

"It was the turning point for us," says Microsoft’s Rains. "We had already started getting serious because of SQL Slammer, but Blaster was the one that really galvanized the entire company."

Two months after the Blaster worm started spreading, Microsoft changed the focus of its second service pack for Windows XP, targeting the entire update on improving the security of users’ systems. In addition, the company kicked off a campaign to educate users and created its bounty program for information leading to the arrest of the perpetrators behind Blaster and the Sobig virus.

While the changes were painful, the results have been overall positive, say security professionals.

"Sadly, the only time when technology companies do things to improve security is when they have enough black eyes," says eEye Maiffret. "That’s what happened with Microsoft."

Other companies and their products are now undergoing the same scrutiny by attackers. Hopefully, they will learn the same lessons.

Recommended Reads

New Year, New You – Managing and sharing your photos

I am trying to clean up my PC at home a little and organize pictures, so I did a little search and I found this.

While not exhaustive, it has great tips for accomplishing things quickly, and we like that.

From the Windows Experience blog

I have over a decade’s worth of digital photos on my PC. I love taking photos. I love capturing the moment wherever that might be – with friends and family over the holidays or out in the middle of nowhere in eastern Washington State. Most people have tons of digital photos scattered around on their PC – and Windows Live Photo Gallery makes it super easy to manage those photos, edit them, and then share them out to anyone you want. In a post earlier today, Kristina shared 3 super easy tips for shaping up your technology habits in 2012. She included a tip on the batch people tag feature in Photo Gallery. I thought I would share a few more tips specific to Photo Gallery that will help you better manage your photos in 2012!

Tip #1: Organizing

By default, whenever you import photos into Photo Gallery from your camera, it puts the photos in your “My Pictures” folder under your user profile in Windows. I like to keep ALL my photos in this folder. And Photo Gallery makes managing this folder of your photos super easy – allowing you to create sub-folders and to drag and drop your photos into any folder you like. You can organize your photos exactly the way you want them to be organized on your PC. By default, when you import photos from a camera in Photo Gallery, it creates a folder with whatever you name the photos you are importing. For example: if you name the photos you are importing “Beach”, a folder called “Beach” is created with those photos inside. For a lot of folks, this might work out fine. But for me, it doesn’t. For me – I like to have everything organized by date.


So I change the default behavior by clicking on “More options” on the “Import Photos and Videos” screen.


For “Folder name”, I change it to be “Date Taken + Name”. That means for any photos I import from my camera in Photo Gallery, Photo Gallery will detect the date taken from the camera and combine that date with whatever name I give the photos I am importing. For example: if you name the photos you are importing “Beach” and they were taken on 1/3/2012, a folder is created called “2012-01-03 Beach” with those photos inside. This allows me to automatically organize any photo I import from my cameras in Photo Gallery by date! In the left-hand navigation you’ll see something that looks like this:

[Screenshot: the Photo Gallery navigation pane with folders organized by date]

Tip #2: Panoramic Stitch

I love creating panoramic stitches in Photo Gallery. Everywhere I go with my camera, I am always thinking about what series of shots will make the best panoramic photo when stitched together. Creating a panoramic stitch is easy:

1. Wherever you are with your camera, just stand in one spot and pan from left to right, taking a series of photos one by one.

2. Then, import your photos into Photo Gallery from your camera.

3. Select the series of photos you took at that spot and go to the “Create” tab in the ribbon at the top of Photo Gallery.

4. Click the “Panorama” button.

And Photo Gallery will stitch together your photos and create a panoramic shot. Now, after your panoramic stitch is created, it might look like this (notice the black areas around the borders?):

[Screenshot: the panoramic stitch (DSC02020 Stitch) with black areas around the borders]

The black areas can be easily removed by simply using the “Crop” feature in Photo Gallery under the “Edit” tab in the ribbon. You can crop your panoramic stitch however you like.


The end result should be something like this:

[Screenshot: the cropped panorama (DSC02020 Stitch)]

Creating panoramic stitches is something that can be done with almost any camera – from a little point-and-shoot to a DSLR. You can even use photos from your Windows Phone and stitch them together too!

Tip #3: Sharing

Organizing and editing your photos is just one element of Photo Gallery. It also makes it easy to share those photos with the people you want to share them with. You can share your photos to Facebook, Flickr, or of course SkyDrive simply by choosing any of these options under the “Share” section of the ribbon on the “Home” tab in Photo Gallery. Early this last summer, we introduced a major update to SkyDrive (and it was updated again this last fall) to make it the best place to access and share your content – including your photos.


SkyDrive displays your photos in a “mosaic” layout and displays the photos in their original aspect ratio. There is also infinite scrolling – meaning that for folders in SkyDrive with tons of photos, you won’t have to navigate from page to page!


When you click on a photo, it puts the photo front-and-center – displaying your caption (description of the photo), tags, comments and more!


SkyDrive is an absolutely awesome way to share photos with friends and family!

I hope these tips help you organize and share your photos with Windows Live Photo Gallery, your PC and SkyDrive in 2012. Download Windows Live Photo Gallery today!

How Graphics Cards Work

I found this article about video cards on HowStuffWorks. I think it is a pretty good and straightforward introduction, which should help you understand what a video card’s job is in a computer. It is worth noting that this article is old. Although most cards still follow these concepts, there are a few notes that I will add at the bottom.


Introduction to How Graphics Cards Work

The images you see on your monitor are made of tiny dots called pixels. At most common resolution settings, a screen displays over a million pixels, and the computer has to decide what to do with every one in order to create an image. To do this, it needs a translator — something to take binary data from the CPU and turn it into a picture you can see. Unless a computer has graphics capability built into the motherboard, that translation takes place on the graphics card.

A graphics card’s job is complex, but its principles and components are easy to understand. In this article, we will look at the basic parts of a video card and what they do. We’ll also examine the factors that work together to make a fast, efficient graphics card.

Think of a computer as a company with its own art department. When people in the company want a piece of artwork, they send a request to the art department. The art department decides how to create the image and then puts it on paper. The end result is that someone’s idea becomes an actual, viewable picture.

A graphics card works along the same principles. The CPU, working in conjunction with software applications, sends information about the image to the graphics card. The graphics card decides how to use the pixels on the screen to create the image. It then sends that information to the monitor through a cable.

Creating an image out of binary data is a demanding process. To make a 3-D image, the graphics card first creates a wire frame out of straight lines. Then, it rasterizes the image (fills in the remaining pixels). It also adds lighting, texture and color. For fast-paced games, the computer has to go through this process about sixty times per second. Without a graphics card to perform the necessary calculations, the workload would be too much for the computer to handle.
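To put rough numbers on that workload (my own back-of-the-envelope figures, not from the article): even a modest 1024 x 768 screen is 786,432 pixels, so redrawing it sixty times a second means roughly 47 million pixel updates per second before any of the lighting, texturing, or color work is counted.

# Rough pixel throughput at an assumed 1024x768 resolution and 60 frames per second.
width, height, fps = 1024, 768, 60
pixels_per_frame = width * height              # 786,432
pixels_per_second = pixels_per_frame * fps     # 47,185,920
print(f"{pixels_per_second:,} pixel updates per second")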

The graphics card accomplishes this task using four main components:

  • A motherboard connection for data and power
  • A processor to decide what to do with each pixel on the screen
  • Memory to hold information about each pixel and to temporarily store completed pictures
  • A monitor connection so you can see the final result

Next, we’ll look at the processor and memory in more detail.


The GPU

Like a motherboard, a graphics card is a printed circuit board that houses a processor and RAM. It also has an input/output system (BIOS) chip, which stores the card’s settings and performs diagnostics on the memory, input and output at startup. A graphics card’s processor, called a graphics processing unit (GPU), is similar to a computer’s CPU. A GPU, however, is designed specifically for performing the complex mathematical and geometric calculations that are necessary for graphics rendering. Some of the fastest GPUs have more transistors than the average CPU. A GPU produces a lot of heat, so it is usually located under a heat sink or a fan.

In addition to its processing power, a GPU uses special programming to help it analyze and use data. ATI and nVidia produce the vast majority of GPUs on the market, and both companies have developed their own enhancements for GPU performance. To improve image quality, the processors use:

  • Full scene anti-aliasing (FSAA), which smooths the edges of 3-D objects
  • Anisotropic filtering (AF), which makes images look crisper

Each company has also developed specific techniques to help the GPU apply colors, shading, textures and patterns.

As the GPU creates images, it needs somewhere to hold information and completed pictures. It uses the card’s RAM for this purpose, storing data about each pixel, its color and its location on the screen. Part of the RAM can also act as a frame buffer, meaning that it holds completed images until it is time to display them. Typically, video RAM operates at very high speeds and is dual ported, meaning that the system can read from it and write to it at the same time.

The RAM connects directly to the digital-to-analog converter, called the DAC. This converter, also called the RAMDAC, translates the image into an analog signal that the monitor can use. Some cards have multiple RAMDACs, which can improve performance and support more than one monitor. You can learn more about this process in How Analog and Digital Recording Works.

The RAMDAC sends the final picture to the monitor through a cable. We’ll look at this connection and other interfaces in the next section.

THE EVOLUTION OF GRAPHICS CARDS

Graphics cards have come a long way since IBM introduced the first one in 1981. Called a Monochrome Display Adapter (MDA), the card provided text-only displays of green or white text on a black screen. Now, the minimum standard for new video cards is Video Graphics Array (VGA), which allows 256 colors. With high-performance standards like Quantum Extended Graphics Array (QXGA), video cards can display millions of colors at resolutions of up to 2048 x 1536 pixels.

This Radeon X800XL graphics card has DVI, VGA and ViVo connections.

PCI Connection

Graphics cards connect to the computer through the motherboard. The motherboard supplies power to the card and lets it communicate with the CPU. Newer graphics cards often require more power than the motherboard can provide, so they also have a direct connection to the computer’s power supply.

Connections to the motherboard are usually through one of three interfaces: PCI (Peripheral Component Interconnect), AGP (Accelerated Graphics Port), or PCI Express (PCIe).

PCI Express is the newest of the three and provides the fastest transfer rates between the graphics card and the motherboard. PCIe also supports the use of two graphics cards in the same computer.

Most graphics cards have two monitor connections. Often, one is a DVI connector, which supports LCD screens, and the other is a VGA connector, which supports CRT screens. Some graphics cards have two DVI connectors instead. But that doesn’t rule out using a CRT screen; CRT screens can connect to DVI ports through an adapter. At one time, Apple made monitors that used the proprietary Apple Display Connector (ADC). Although these monitors are still in use, new Apple monitors use a DVI connection.

Most people use only one of their two monitor connections. People who need to use two monitors can purchase a graphics card with dual head capability, which splits the display between the two screens. A computer with two dual head, PCIe-enabled video cards could theoretically support four monitors.

In addition to connections for the motherboard and monitor, some graphics cards have connections for television and video input/output, such as the ViVo connector mentioned above. Some cards also incorporate TV tuners. Next, we’ll look at how to choose a good graphics card.

DIRECTX AND OPENGL

DirectX and OpenGL are application programming interfaces, or APIs. An API helps hardware and software communicate more efficiently by providing instructions for complex tasks, like 3-D rendering. Developers optimize graphics-intensive games for specific APIs. This is why the newest games often require updated versions of DirectX or OpenGL to work correctly.

APIs are different from drivers, which are programs that allow hardware to communicate with a computer’s operating system. But as with updated APIs, updated device drivers can help programs run correctly.

Some cards, like the ATI All-in-Wonder, include connections for televisions and video as well as a TV tuner.

Photo courtesy of HowStuffWorks Shopper

Choosing a Good Graphics Card

A top-of-the-line graphics card is easy to spot. It has lots of memory and a fast processor. Often, it’s also more visually appealing than anything else that’s intended to go inside a computer’s case. Lots of high-performance video cards are illustrated or have decorative fans or heat sinks.

But a high-end card provides more power than most people really need. People who use their computers primarily for e-mail, word processing or Web surfing can find all the necessary graphics support on a motherboard with integrated graphics. A mid-range card is sufficient for most casual gamers. People who need the power of a high-end card include gaming enthusiasts and people who do lots of 3-D graphic work.

A good overall measurement of a card’s performance is its frame rate, measured in frames per second (FPS). The frame rate describes how many complete images the card can display per second. The human eye can process about 25 frames every second, but fast-action games require a frame rate of at least 60 FPS to provide smooth animation and scrolling. Components of the frame rate (a rough numeric sketch follows the list below) are:

  • Triangles or vertices per second: 3-D images are made of triangles, or polygons. This measurement describes how quickly the GPU can calculate the whole polygon or the vertices that define it. In general, it describes how quickly the card builds a wire frame image.
  • Pixel fill rate: This measurement describes how many pixels the GPU can process in a second, which translates to how quickly it can rasterize the image.
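
As a rough sketch of how pixel fill rate alone bounds the frame rate, consider the calculation below. The 2 gigapixel/s fill rate and the overdraw factor are illustrative assumptions, not measurements of any real card, and real games are limited by many other factors as well.

    # Frame rate that a given pixel fill rate could sustain at a resolution,
    # ignoring every other bottleneck. Fill rate and overdraw are hypothetical.
    def max_fps(fill_rate_pixels_per_s, width, height, overdraw=2.5):
        """Pixels the GPU can rasterize per second, divided by pixels drawn per frame."""
        pixels_per_frame = width * height * overdraw
        return fill_rate_pixels_per_s / pixels_per_frame

    print(f"{max_fps(2.0e9, 1920, 1080):.0f} FPS at 1920x1080")   # ~386 FPS, fill-rate bound only
    print(f"{max_fps(2.0e9, 2048, 1536):.0f} FPS at 2048x1536")   # ~254 FPS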

The graphics card’s hardware directly affects its speed. These are the hardware specifications that most affect the card’s speed and the units in which they are measured:

  • GPU clock speed (MHz)
  • Size of the memory bus (bits)
  • Amount of available memory (MB)
  • Memory clock rate (MHz)
  • Memory bandwidth (GB/s)
  • RAMDAC speed (MHz)

The computer’s CPU and motherboard also play a part, since a very fast graphics card can’t compensate for a motherboard’s inability to deliver data quickly. Similarly, the card’s connection to the motherboard and the speed at which it can get instructions from the CPU affect its performance.
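
To see how two of the numbers in the list above combine, here is a minimal sketch of the usual memory bandwidth formula. The 1000 MHz clock and 256-bit bus are hypothetical figures, and the doubling factor assumes GDDR-style memory that transfers data twice per clock.

    # Memory bandwidth = effective transfers per second x bus width in bytes.
    def memory_bandwidth_gb_s(memory_clock_mhz, bus_width_bits, transfers_per_clock=2):
        effective_mt_s = memory_clock_mhz * transfers_per_clock      # mega-transfers per second
        return effective_mt_s * 1e6 * (bus_width_bits / 8) / 1e9     # gigabytes per second

    # Hypothetical card: 1000 MHz memory clock on a 256-bit bus
    print(f"{memory_bandwidth_gb_s(1000, 256):.0f} GB/s")  # 64 GB/s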


INTEGRATED GRAPHICS AND OVERCLOCKING

Many motherboards have integrated graphics capabilities and function without a separate graphics card. These motherboards handle 2-D images easily, so they are ideal for productivity and Internet applications. Plugging a separate graphics card into one of these motherboards overrides the onboard graphics functions.

Some people choose to improve their graphics card’s performance by manually setting its clock speeds to a higher rate, a practice known as overclocking. People usually overclock the memory, since overclocking the GPU can lead to overheating. While overclocking can lead to better performance, it also voids the manufacturer’s warranty.


Added notes: although the article has a section on how to choose a video card, relying on raw numbers alone won’t help much. To choose the correct video card, you need to take power requirements, heat dissipation and the intended application into consideration. Although this sounds more complicated, it is easy to relate to.

  • Power Requirements: your power supply not only needs to provide enough steady power (watts), but you also need to consider whether the extra power is worth it. Sure, you can spend $500 on a video card and never worry about it being fast enough, but your electric bill will be higher, and your PSU (power supply unit) will probably need to deliver 650 W or more for a card of that caliber (a rough running-cost sketch follows this list).
  • Heat Dissipation: more power usually means more heat, but not always. As technology advances, companies are getting better at optimizing their components so they can compute the same amount with less power and less heat. For example, NVIDIA’s GF100 GPU (Fermi, GeForce GTX 465/470/480) was powerful but ran hot, while the GF110 (GeForce GTX 560/570/580) produced considerably less heat (1).
  • Application: this is a bit more difficult to quantify. If you are a gamer, what kind of games will you play? How long are you planning to keep your current setup, and what is your budget? If you do multimedia work, to what extent? Do you edit video and pictures? Do you only browse the web? Will you use Windows 7?
    • All these questions have different answers. A very common problem I saw with Vista and Windows 7 was inappropriate graphics cards. Not playing video games does not mean you should get the cheapest graphics card: Windows will still use the graphics card, when it can, to draw your desktop, so a slow graphics card means the CPU may end up doing that work instead, and performance will take a hit.
    • Multimedia is also an important factor; video and image editing software can use the video card for processing. This is fairly new in the mainstream. Since a GPU can be much faster than a CPU at this kind of highly parallel work, Photoshop CS5.5 can work faster on images when using NVIDIA cards with CUDA support or AMD cards with OpenGL 2.0.
    • Some video encoding software also offloads work to specific video cards’ GPUs. I have personally tested a couple, and while the conversion process was not faster, because the GPU was doing the work I was able to do other things like document editing and web browsing without the computer feeling slow.
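
On the running-cost point in the power bullet above, here is a quick sketch. The 150 W of extra draw, four hours of use per day and $0.12 per kWh are all assumptions you should replace with your own numbers.

    # Rough yearly electricity cost of a card's extra power draw.
    # 150 W extra, 4 hours/day and $0.12 per kWh are illustrative assumptions.
    def yearly_cost(extra_watts, hours_per_day, dollars_per_kwh):
        kwh_per_year = extra_watts / 1000 * hours_per_day * 365
        return kwh_per_year * dollars_per_kwh

    print(f"${yearly_cost(150, 4, 0.12):.2f} per year")  # about $26 per year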

To choose a video card, I suggest using Tom’s Hardware’s Best Graphics Card for the Money list. It is still not an exact science, but it will steer you in the right direction a lot better.

Integrated video: although sufficient in most cases, it might not always be. For example, an older computer with Windows 7 might not display the desktop smoothly. The fix would be to change the theme to a basic one (which removes a lot of effects in favor of performance) or to get an inexpensive graphics card. The cheapest card in Tom’s Hardware’s list (~$75) will be more than enough for this.

While talking about integrated video, I can also point out that I find AMD’s Fusion (APU) solutions to be more responsive for video than Intel’s (Core ix). I consider Intel integrated graphics (HD Graphics 3000/2000) to be adequate only for light graphics work, and the Windows Experience score reflects that. However, it will be enough for Blu-ray in most cases.**

Integrated graphics and Blu-ray: decoding (playing) Blu-ray can be quite heavy. If the video card is not doing the decoding, then the CPU will have to. There is also a consideration for the audio, which is protected over HDMI and falls back to a lower-quality encoding if it is not on a protected channel. That is beyond the scope of these notes, but it should be considered if the main use will be Blu-ray playback.

Motherboard connection: many mid- to high-end cards will take up two slots on the motherboard. This is a special consideration for multi-card setups (SLI/CrossFire), since two cards would take four expansion slots and three would take six, which also makes cooling and power more of an issue.

The article is not quite accurate when explaining the connectors (VGA, DVI). There are LCD monitors with only VGA, as well as CRT monitors with BNC connectors. CRTs have higher refresh rates, producing a crisper image, although LCDs are reaching those levels now and even surpassing them on higher-end models.

There are also HDMI and DisplayPort. HDMI is similar to DVI, but it can carry an audio signal (if the graphics card allows it).

DisplayPort is a bit more complicated, but more interesting. Check this link for more information:

http://www.techradar.com/news/computing/hdmi-vs-displayport-which-is-best–922876?artc_pg=2

Update for January 22, 2016: it has been four years since the article, and the information is still valid. However, here are a few more updated notes.

CRT monitors are now difficult to find. A good-quality CRT still produces the sharpest image, but OLED is aiming to change that.

New issues in choosing a video card today are the monitors in use and the intended quality. Most likely, people looking for a video card are planning to play games on the computer. With 4K, 2K, WQHD and other high resolutions, a better video card is needed to play games at those resolutions.

For example, the LG 25UM57-P is a 25″ monitor with a resolution of 2560×1080 (2K). The familiar 1080p resolution is actually 1920×1080, which means there are 640×1080 more pixels to draw. And it does help, because the HUD components are no longer crowded in the middle. But it does require a bit more power.

Another problem is that not all games support those resolutions, but that is outside the scope of this article. For that, use something like Flawless Widescreen.

4K monitors take the point I just made to a whole higher level. 4K is 3840×2160, but it can be 4096×2160 and even 5120×2160 in ultra-wide format. That is twice the width and twice the height of 1080p, so four times the pixels have to be drawn and sent to the monitor.
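
To put these resolution comparisons into numbers, here is a minimal sketch using the resolutions mentioned in these notes:

    # Pixel counts relative to 1920x1080, to show how much more a card must draw.
    resolutions = {
        "1080p (1920x1080)":     (1920, 1080),
        "Ultrawide (2560x1080)": (2560, 1080),
        "4K UHD (3840x2160)":    (3840, 2160),
    }
    base = 1920 * 1080
    for name, (w, h) in resolutions.items():
        pixels = w * h
        print(f"{name}: {pixels:,} pixels ({pixels / base:.2f}x 1080p)")
    # 2560x1080 is about 1.33x the pixels of 1080p; 3840x2160 is 4x.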

So, you are probably thinking, OK, I need a more powerful video card for the higher resolutions. But that is not all.

Because there is so much more data, the connection is important as well. I use 2K monitors, and they will only offer their full resolution when connected by dual-link DVI-D or DisplayPort, not over HDMI or VGA. So now the connectors matter as well.

This is not the same for all monitors and depends a lot on the HDMI version used. For example, HDMI 1.4 supports 4K, but only at up to 30 Hz, while HDMI 2.0 supports 4K at up to 60 Hz.
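
A rough way to see why the link matters: the uncompressed video data rate is width × height × refresh rate × bits per pixel. The 24-bit color depth below is an assumption, and the figure ignores blanking intervals and link encoding overhead, so real cable requirements are somewhat higher.

    # Raw (uncompressed, no blanking/encoding overhead) video data rate in Gbit/s.
    # 24-bit color is an illustrative assumption.
    def raw_gbps(width, height, refresh_hz, bits_per_pixel=24):
        return width * height * refresh_hz * bits_per_pixel / 1e9

    print(f"4K @ 30 Hz: {raw_gbps(3840, 2160, 30):.1f} Gbit/s")   # ~6.0 Gbit/s
    print(f"4K @ 60 Hz: {raw_gbps(3840, 2160, 60):.1f} Gbit/s")   # ~11.9 Gbit/s
    # The 60 Hz figure already exceeds what older HDMI revisions can carry,
    # which is why the cable and port version matter at these resolutions.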

For an HDMI version comparison chart, see the following Wikipedia article:

https://en.wikipedia.org/wiki/HDMI#Version_comparison

3D is something else to consider; it requires a special monitor as well and makes the topic more complex. I won’t even touch virtual reality because it is too new at the moment.

Personally, at those resolutions I would not bother with HDMI and would use DisplayPort instead. The reason is that most 2K and higher monitors will have DisplayPort (or Mini DisplayPort), so as long as your video card has the port, it is the safer connection for getting the best quality.

Update about the need for powerful cards:

Tom’s Hardware has a good article about the best CPUs for gaming. But wait, this is a video card article; why are we talking about CPUs? Easy: the CPU charts pretty much say that a Core i5 is great for gaming and that a Core i7 won’t give you much more. So, spend that budget on the video card. Great advice, except for one thing.

There are circumstances where you won’t need more video power. This is changing at the moment, though not quickly enough.

PC gaming has been lagging lately. It is an undeniable truth that most gaming happens on consoles, and consoles are where the money is for developers. After all, they charge more for the console version than for the PC version, and PC game prices tend to be discounted much faster and more deeply than console prices.

So, what has generally happened over the last seven years is that a game was developed for the Xbox 360 and PS3 and then ported to PC. The PC version generally came from the Xbox 360, but the Xbox 360 has a different CPU architecture, so the ports are not perfect.

This results in games capping at 60 fps and 1080p at maximum settings, or “high” detail settings that look barely better than “medium”. Some ports had awful quality and looked like PS2 versions (and some ports came from the PS2 directly). These games generally don’t tax the video card, or only taxed it when one particular setting was enabled (because it was improperly applied or poorly ported).

So, now the PS4 and Xbox One are out: a new generation of consoles that can run games at 1080p/60 fps. Does this mean we need more powerful video cards? Not really. PC video cards are already more powerful than what is inside the consoles, but games are still not optimized for the new consoles. The consoles are about two years old, and still not a lot of games run at 1080p/60 fps; instead they run at 1080p/30 fps. PC games have been running at 1080p and well above 60 fps for a long time.

What it does mean is that we will see more demanding games, but at the moment my Radeon HD 7870, which is a few years old, can still play most games that I own on the highest settings.

**Update on integrated video: integrated video has come a long way in the last four years. I have tested and used a lot of integrated video, and I have been pleasantly surprised.

Integrated video is in no way comparable to a dedicated video card, but the 6th generation of Intel HD Graphics (Intel Core i3/i5/i7 6xxx) can drive up to three displays, and most configurations will have at least a Mini DisplayPort. This matters because, before, I would have used a dedicated video card for multi-monitor office setups. Nowadays I use the integrated video, and I even have a Gigabyte BRIX Pro with Intel Iris graphics that can play Tomb Raider at a comfortable medium detail level.

Intel HD Graphics also includes Quick Sync for hardware video encoding and decoding, which genuinely helps.

In summary, video cards have changed a lot. Reading the article will help you understand how they work, but much has changed and will continue to change, so it is a moving target.

Check Tom’s Hardware’s graphics card charts and Best Graphics Card for the Money to get more specific and up-to-date recommendations; they update the recommendations monthly.