File Server Management with Windows PowerShell

Managing file servers can be a tedious and thankless task for many IT pros. But it doesn't need to be that way. By incorporating Windows PowerShell, you can easily get a handle on shared file resources -- and maybe even have a little fun. And none of it really requires any scripting. Everything I want to show you should take no more than a few interactive commands. Of course, if you find yourself running these commands often, turning them into a script or function is the next logical step. The beauty of PowerShell is that you can take the results of the commands that I'm going to demonstrate and do whatever you want. Need to save the results to a comma-separated value (CSV) file? Pipe them to Export-CSV. Need an HTML report? Use ConvertTo-Html. And everything I'm going to show you scales: if you can use a command for one computer, you can use it for many. First, let me show you what you can do to manage what you have in your file shares today. Then we'll look at provisioning file sharing.

Get File Shares

Let's begin by identifying what's being shared. This task is easy: simply use Windows Management Instrumentation (WMI) to query the Win32_Share class. You don't even need to be logged on to the file server. You can run this command from the comfort of your cubicle:

Get-WmiObject -Class Win32_Share -ComputerName MyFile

Now, when you run this command, you'll get all shares, including printers (if any). Because we're talking about file shares, let's limit the query. All Win32_Share instances have a Type property, as Table 1 shows. Thus, to limit the search, we can add a filter to our original command:

Get-WmiObject -Class Win32_Share -ComputerName MyFile -Filter "Type=0"

This approach gets rid of the administrative shares. You can see an example in Figure 1.

Figure 1: Listing Non-Administrative Shares with WMI

But if you're looking for other hidden shares -- that is, those that end in a dollar sign ($) -- all you need is a slight tweak to the filter:

Get-WmiObject -Class Win32_Share -ComputerName MyFile -Filter "Type=0 AND name like '%$'"

In WMI, the percent character (%) is used as a wildcard. Returning all shares except those that are hidden is a little trickier. You'll need to use a compound comparison with a wildcard:

Get-WmiObject -Class Win32_Share -ComputerName MyFile -Filter "type=0 AND name like '%[^$]'"

This command returns all Win32_Share objects that have a Type property of 0 and a name that doesn't end in $.

Get Folder Size

A typical task that's probably on your plate is creating reports about how much disk space a folder is consuming. The quick approach is to simply use Get-ChildItem, or its alias dir, and pipe the results to Measure-Object:

dir c:\shares\public -Recurse | where {-Not $_.PSIsContainer} | Measure-Object -Property length -Sum -Minimum -Maximum

You'll end up with a measurement object that shows the total number of objects, the total size in bytes, and the smallest and largest file sizes. In the previous command, I've filtered out folders. PowerShell 3.0 has better ways of doing this, but the command that I've used works in both PowerShell 2.0 and 3.0. This is the type of command that is best run locally (a great reason to use PowerShell remoting). The code in Listing 1 combines this command with our WMI technique to get a size report for top-level folders. You can format or process $results any way you like. How about an easy-to-read table? Just use this command:

$results | Format-Table Computername,Fullname,SizeKB,NumberFiles -AutoSize

Figure 2 illustrates the output that you can expect.

Figure 2: Easy-to-Read Output

It doesn't take much more effort to build a comprehensive usage report for all shares on a file server. I'll save you the time: take a look at Listing 2. Again, I can slice and dice $results any way I need. Figure 3 shows one approach.

Figure 3: Usage Report for File Server Shares

Get Files by Owner

A variation on this theme is to find file usage by owner. If you use quotas, you most likely already have reporting in place. Otherwise, all you need to do is retrieve the file ACL, which includes the owner, and aggregate the results. I find that the best approach is to add the owner as a custom property:

$data = dir | where {-not $_.PSIsContainer} | select name, @{Name="Owner"; Expression={(Get-ACL $_.Fullname).Owner}}, length

We can group this output by the new Owner property, and then process the new object:

$data | group owner | Select Name,Count,@{Name="Size"; Expression={($_.Group | Measure-Object -Property Length -Sum).Sum}}

With just a little effort, you can apply this approach to a file share, as the code in Listing 3 does. I should also point out that you might run into issues with Get-ACL if filename paths are longer than 260 characters. In PowerShell 3.0, Get-ACL supports a -LiteralPath parameter, which helps.
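Listings 1 and 2 aren't reproduced here, but a minimal sketch of the per-share usage report they describe might look like the following. This is my own reconstruction, not the author's listing; the use of [pscustomobject] requires PowerShell 3.0, and it assumes you run it locally (or in a remoting session) on the file server so that each share's local path is reachable:

```powershell
# Sketch: usage report for all non-administrative shares on this server
$shares = Get-WmiObject -Class Win32_Share -Filter "Type=0"
$results = foreach ($share in $shares) {
    # Measure every file under the share's local path
    $stats = Get-ChildItem $share.Path -Recurse -ErrorAction SilentlyContinue |
        Where-Object { -not $_.PSIsContainer } |
        Measure-Object -Property Length -Sum
    [pscustomobject]@{
        Computername = $env:COMPUTERNAME
        Share        = $share.Name
        Path         = $share.Path
        SizeKB       = [math]::Round($stats.Sum / 1KB, 2)
        NumberFiles  = $stats.Count
    }
}
$results | Format-Table -AutoSize
```

As with the article's examples, $results is an object collection, so you could just as easily pipe it to Export-CSV or ConvertTo-Html instead of Format-Table.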
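Listing 3 isn't shown either. A hedged sketch of the files-by-owner report applied to a share (the UNC path is illustrative) could be:

```powershell
# Sketch: aggregate file count and total size per owner for one share
$data = Get-ChildItem \\MyFile\public -Recurse -ErrorAction SilentlyContinue |
    Where-Object { -not $_.PSIsContainer } |
    Select-Object Name, Length,
        @{Name='Owner'; Expression={ (Get-Acl $_.FullName).Owner }}

$data | Group-Object Owner |
    Select-Object Name, Count,
        @{Name='Size'; Expression={ ($_.Group |
            Measure-Object -Property Length -Sum).Sum }}
```

Keep the 260-character path caveat from the article in mind: Get-Acl can fail on very deep folder trees, which is why -ErrorAction SilentlyContinue is used here.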
I'll admit that some of these examples are getting to be a bit much to type -- and they aren't the only way to accomplish these tasks, but the point is that you can. You can also get by with less detail and structure for ad hoc-style reporting. Figure 4 illustrates this code sample with the results formatted as an easy-to-read table.

Figure 4: Ad Hoc Reporting

Get Files by Age

The last reporting technique that I want to demonstrate is building a file aging report. Actually, what we're creating is a collection of objects that we can reuse in several ways. You might want to use the objects to delete or move files, or you might want to build a report that can be emailed to management. Always construct PowerShell commands with maximum reuse and flexibility in mind. Capturing file aging is a tricky thing. In PowerShell, the file object has several properties that you might want to use. For example,

get-item c:\work\wishlist.txt | Format-List Name,*time

produces the output that Figure 5 shows.

Figure 5: File Aging Data

Personally, I find it best to use LastWriteTime to indicate when a file was last touched. I've seen situations in which LastAccessTime is updated through third-party tools such as virus scanners, which can lead to erroneous conclusions. And LastAccessTime has been disabled by default since the days of Windows Vista, although you can re-enable it. You also need to be careful because these values can change depending on whether you're copying or moving a file between volumes. But you can decide for yourself. Using this file as an example, we can have PowerShell tell us how old the file is, as Listing 4 shows. The Age property is a TimeSpan object, and the Days property is merely the TotalDays property of that object. But, because we can do this for a single file, we can do it for all files. Let's look at my public share and find all the files that haven't been modified in 400 days:

dir c:\shares\public -Recurse | Select Fullname,CreationTime,LastWriteTime, @{Name="Age"; Expression={(Get-Date) - $_.LastWriteTime}}, @{Name="Days"; Expression={[int]((Get-Date) - $_.LastWriteTime).TotalDays}}, @{Name="Owner"; Expression={(Get-ACL $_.Fullname).Owner}} | Where {$_.Days -ge 400} | Sort Days -Descending

I went ahead and included the file owner. Figure 6 shows the results from running this code in a remote session on my file server.

Figure 6: Running Code in a Remote Session

I could save these results to a variable and reuse them however I want. Because I have the full filename, piping the variable to a command such as Remove-Item wouldn't be too difficult. Another approach to file aging is to build a report, or object collection, based on file age buckets. A little more effort is involved; after we calculate or determine the age element, we need to add some logic to do something with it. One of my favorite techniques is to find out how many files were last modified, by year. Again, I'll use the interactive remoting session on my file server to demonstrate:

dir c:\shares\sales -Recurse | Select Fullname,LastWriteTime, @{Name="Age"; Expression={(Get-Date) - $_.LastWriteTime}}, @{Name="Year"; Expression={$_.LastWriteTime.Year}} | Group-Object Year | Sort Name

As you can see in Figure 7, it looks as though some cleanup is in order. If I need more detail, I can always analyze the Group property, which is the collection of files.

Figure 7: Discovering Modified Files by Year

Finally, what about aging buckets? It might be useful to know how many files haven't been modified in, say, 30 days, 90 days, or a year. Unfortunately, there isn't an easy way to use Group-Object for this, so I need to take a more brute-force approach; take a look at Listing 5. Figure 8 shows the result when I run this code against my scripts folder, which I know has a decent age distribution. My code doesn't include the actual files, but it wouldn't be too difficult to modify my example.
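Listing 5 isn't reproduced here either, but a brute-force bucket count along the lines the article describes might look like this. It's my own sketch, not the author's code; the bucket thresholds and the C:\Scripts path are illustrative, and [ordered] requires PowerShell 3.0:

```powershell
# Sketch: count files by last-modified age bucket
$buckets = [ordered]@{ '0-30 days' = 0; '31-90 days' = 0; '91-365 days' = 0; 'Over a year' = 0 }

Get-ChildItem C:\Scripts -Recurse |
    Where-Object { -not $_.PSIsContainer } |
    ForEach-Object {
        $days = ((Get-Date) - $_.LastWriteTime).TotalDays
        # Inside switch, $_ refers to the value being tested ($days)
        switch ($days) {
            { $_ -le 30 }  { $buckets['0-30 days']++;   break }
            { $_ -le 90 }  { $buckets['31-90 days']++;  break }
            { $_ -le 365 } { $buckets['91-365 days']++; break }
            default        { $buckets['Over a year']++ }
        }
    }

$buckets
```

To keep the actual files instead of just the counts, you could accumulate each $_ into a per-bucket array rather than incrementing a counter.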
ReFS: What You Need to Know About the Resilient File System (Part 1)

By Deb Shinder, WindowsNetworking.com [Published on 24 March 2015 / Last Updated on 24 March 2015]

If you would like to be notified when Deb Shinder releases the next part of this article series, please sign up to the WindowsNetworking.com real-time article update newsletter.

Introduction

The Resilient File System (ReFS) was introduced in August 2012 with Windows Server 2012, as a potential successor to Microsoft's venerable NT File System (NTFS). In this article, we'll take a look at how ReFS came to be and where it is two years after its release; then we'll discuss how to work with ReFS in Microsoft's current operating systems and speculate a little about where the Windows file system might be headed in the future.

A Brief History of Windows File Systems

The file system used by a computer operating system defines how data is structured, stored and handled. An operating system can include support for more than one file system. For purposes of this article, we're talking about disk file systems, although there are many other types, such as network file systems (SMB, NFS), optical disc file systems such as UDF, and flat file systems such as that used by Amazon's S3 cloud storage service. In particular, we're focusing on the disk file systems supported by Windows. Those of us who have been working with Windows for a long time have been through several evolutions of the file system upon which the OS depends. MS-DOS and Windows 3.x (which ran on top of MS-DOS) used the FAT (File Allocation Table) file system. For the limited uses of which operating systems and applications were capable at the time, it worked. It was simple and provided good performance. But it had some serious limitations that came to light as subsequent versions of the Windows operating system became more sophisticated and applications became more demanding. The biggest problem with FAT was that hard drives kept outgrowing it. The earliest version of FAT was an 8-bit file system that most of us never used.
The next version of FAT was known as FAT12. It was used by MS-DOS and IBM's PC-DOS, which ran on many of the early personal computers in the 1980s as PCs started to really catch on. FAT12 had a disk size limit of 32 MiB (or 256 MiB with a 64 KiB cluster size). Note: MiB stands for mebibyte, which is slightly different from MB or megabyte. For a full explanation of the difference, see this article. This limitation wasn't a problem with my first "gigantic" hard drive that could store a whole 10 MB, but available drive sizes quickly expanded in response to ever-growing file sizes, and soon we needed more space. Unfortunately, the next version of FAT, the original FAT16, still carried a 32 MB partition limitation. A few years later, we got some relief in the improved version of FAT16 (FAT16B), which was able to handle partitions up to 4 GB in size. It still wasn't enough, so Microsoft brought out FAT32 in Windows 95 OSR2. Windows 98 let you convert old FAT16 partitions to FAT32 without reformatting and losing your data. FAT32 could support a volume up to 32 GB in size. Note: The partition/volume size supported is also dependent on the operating system as well as the file system format. The theoretical limitations of a file system are usually greater than the practical limits imposed by the hardware and OS. Partition size limitations weren't the only problem with FAT. The NTFS file system was first introduced by Microsoft in 1993 with Windows NT 3.1, the "business" fork of the Windows OS. In addition to better support for larger partitions and file sizes, NTFS has better reliability and error recovery, faster performance, and better security through file-level permissions/access control lists. NTFS further supports file-level encryption and compression, which FAT does not. NTFS has been the file system of choice for twenty years, but it's getting a bit long in the tooth. In the 1990s, Microsoft was already experimenting with a new file system called Object File System (OFS), but it never saw the light of day. Then in the early 2000s, Microsoft again raised expectations about a new file system, code-named WinFS, which included a relational database and a shared schema. It was expected to be released as part of Windows Vista (at that time known by the code name Longhorn). That didn't happen, either. NTFS continued to endure.

Introducing ReFS

Finally, a new file system did emerge in 2012, when Microsoft released Windows Server 2012 with ReFS. Today's computing needs are very different from those in the days of FAT. We routinely create huge presentation and video files that can be several gigabytes in size. A single HD movie file can be 20 GB or more. Hard drive capacity is measured in terabytes. ReFS supports a theoretical maximum volume size of one yottabyte, which is equal to one trillion terabytes. A single file on a ReFS volume can be 16 exabytes. That should be sufficient to handle expanding file sizes for a while. ReFS is designed as a "next gen" file system for Windows, but current versions of the OS -- Windows Server 2012 R2, Windows 7, and 8/8.1 -- still support NTFS and even FAT32. To format a drive larger than 32 GB in FAT32, though, you'll need a utility such as fat32format. Most hard drives on the new operating systems are still formatted in NTFS, and you won't see the option to format a partition in ReFS in the Windows 8.1 client. ReFS is currently supported only for use by file servers, and you have to jump through some hoops to enable full read/write support for ReFS (which isn't something you would normally want or need to do other than for experimental purposes). That involves creating a new registry key, after which you can format new partitions in ReFS using the diskpart utility. For instructions on how to do that, check out this article. While ReFS has some great new features that NTFS doesn't, it's also lacking some of NTFS's capabilities.
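The article linked for that procedure isn't included here, but the workaround that was widely reported at the time went roughly as follows. Treat this as a sketch: the MiniNT key name comes from third-party guides rather than official Microsoft documentation, the whole procedure is unsupported, and you should test it on a disposable volume only:

```powershell
# Reported (unofficial, unsupported) tweak to enable ReFS formatting:
# create the MiniNT key, format the volume, then remove the key.
New-Item -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\MiniNT'

# Then, in an elevated diskpart session:
#   list volume
#   select volume <n>      (pick the target volume carefully)
#   format fs=refs quick

# Remove the key again afterward, since it changes other system behavior:
Remove-Item -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\MiniNT'
```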
You can't (at least at this time) boot Windows from a ReFS volume, and the first versions of ReFS don't include file-level compression and encryption, disk quotas, or hard links, all of which are advantages of NTFS over the FAT file systems. Note that ReFS does support sparse files, reparse points, case-sensitive file names, and Unicode in file names; perhaps most important, it preserves and enforces access control lists (ACLs). It's obvious that ReFS in its current iteration is not a replacement for NTFS, but rather is intended for use in specific circumstances, particularly for storage of very large data sets. The different underlying structure and the lack of support for some features that we've come to take for granted in NTFS present a "gotcha" for anyone who would aspire to use ReFS as the main file system, because some applications that rely on specific NTFS features might not work with ReFS. Many NTFS disk tools don't work with ReFS, either, because it handles file metadata differently. Now for the good news: storage of most conventional data doesn't require the specific NTFS features that aren't supported by ReFS, so ReFS can handle that duty nicely. Its primary use case is on file servers that store extremely large amounts of data. It has data integrity and recovery mechanisms built into the file system as well. That means the tools designed to detect and repair file corruption in other file systems aren't necessary, so their incompatibility with ReFS isn't really an issue. Additionally, although ReFS doesn't support file-level (Encrypting File System) encryption, BitLocker can be used to protect ReFS volumes, so that's not so much of an issue, either -- and with today's gigantic hard drives that cost only a few pennies per gigabyte, does anyone really use disk compression anymore anyway?

Summary

ReFS has some distinct advantages over NTFS, the current reigning Windows file system, but it also has some drawbacks. It boasts self-healing powers, the ability to repair files without downtime, less risk that data will be lost when there's a power failure (due to the way it writes metadata), and of course the ability to create huge volumes and files, and even to give those files path names longer than NTFS's 260-character limit allows. But it's not quite ready for prime time yet. Disk quotas, for example, are commonly relied upon in business environments to prevent "data hoarders" from overloading the file servers. The new file system is expected to undergo some changes in the next generation of Windows (Windows 10). Will it someday replace NTFS? That was the obvious plan of Steven Sinofsky, who wrote an enthusiastic blog post about the next-gen file system back in early 2012. Of course, it took quite some time for NTFS to overtake FAT32, on Windows client systems in particular. Such transitions don't (and probably shouldn't) generally come quickly. In the meantime, ReFS is being used primarily with Microsoft's Storage Spaces feature, which some are predicting will be a hardware RAID killer. In Part 2 of this series, we'll look at how ReFS and Storage Spaces work together.