Planning Analytics


Managing Disk Space through PAW

By George Tonkin posted 2 days ago

  



Introduction

In many instances you have finite disk space to manage, and you need to maintain and prune logs, backups and temporary files so that the model does not run out of disk space.

With the deprecation of the RDP environment on PAoC, seeing where your disk space is being allocated may be a bit trickier than it has been, even with the recent addition of the Drive Explorer. Currently, Drive Explorer gives you an overall view of how much space is available based on your quota, as well as the sizes of individual files, but no aggregates or sizes at folder level, and no visualisations to make the interpretation simpler and faster.

This article will take you through building a tool to allow you to quickly view your space consumption and manage it more effectively.

Building the Tool

To build the tool we need a few components:

·         A PowerShell script to create a file with the folders, files and their sizes

·         A dimension to store the folder structure and the file names

·         A cube to store file sizes

·         Some user preferences to allow us to change the path to interrogate

·         A TI process to execute the PowerShell script and update our cube

·         A PAW book to bring it all together and show us an appropriate visualisation

PowerShell Script

Create a PowerShell script called GetFileSizes.ps1 on your local machine using a text editor like Notepad++.

Copy and paste the code below into the new file in the text editor and then save it.

GetFileSizes.ps1

# Requires $folderPath as an argument
# Add your own validation to check that the path exists, etc.

# Accept the folder path as an argument from the command line, or use a default
param (
    [string]$folderPath = ".\"  # Default value if no argument is provided
)

# Get all files recursively from the folder
$files = Get-ChildItem -Path $folderPath -Recurse -File

# Create a list of objects with the desired properties
$fileList = $files | ForEach-Object {
    $fullPath = $_.DirectoryName
    $relativeFolderPath = if ($fullPath -eq $folderPath) {
        '.'  # File sits in the root of the scanned folder
    } else {
        $fullPath
    }

    [PSCustomObject]@{
        FolderName = ($relativeFolderPath + "\")
        FileName   = $_.Name
        FileSize   = $_.Length  # Size in bytes
    }
}

# Export the data to files.csv
$fileList | Export-Csv -Path "files.csv" -NoTypeInformation
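Run from the command line, the script writes files.csv next to itself. On Windows PowerShell, Export-Csv quotes every value, so the output should look something like the following (the paths, file names and sizes below are illustrative only):

```
"FolderName","FileName","FileSize"
"C:\TM1\Data\","tm1s.cfg","1824"
"C:\TM1\Data\logs\","tm1server.log","5242880"
```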

Copy/SFTP the script to the Data folder of your TM1/PA instance. You could of course place it in another folder, such as a scripts folder, and adjust the TI process accordingly, but for now we keep it simple so that when we execute the PowerShell script it runs in the same folder that TM1 executes from.

Create the File Size Cube

We need to create two new dimensions:

·         _S-File Size Files

·         _S-File Size Measures

_S-File Size Files Dimension

Create and save the dimension. You can add a dummy member like # so that the dimension is not empty when we create a view based on it. The relevant members will be added by our process each time it is run.

Edit the dimension properties to set the order and ensure that we view as a hierarchy:

_S-File Size Measures Dimension

Create and save the dimension with a single measure called Size of type Numeric.

Right-click the member and Format values to Rounded as we do not need decimals.

_S-File Size Cube

Create a new cube called _S-File Size and assign the _S-File Size Files and _S-File Size Measures dimensions to it.

Create a view with _S-File Size Files on rows and Size from _S-File Size Measures on columns per the following MDX:

SELECT 
   {
      [_S-File Size Measures].[_S-File Size Measures].MEMBERS
   } ON 0,   
   {
      [_S-File Size Files].[_S-File Size Files].MEMBERS
   } ON 1
FROM
   [_S-File Size]

This view will form the basis of our visualisation and should look like the one below:

Save the view as Default, and as Public and MDX.

User Preferences

I use a User Preferences cube for various references, especially to drive MDX queries based on their selected values. This may be a Period, a Cost Centre or a key word e.g. to filter a set by wildcard.

What I need is to add a new member to my User Preferences to store the folder that I want to interrogate.

We will assume that you do not have a User Preferences cube and create it.

User Preferences Dimension

Create a new dimension called _S-User Preferences and add a new member called File Size Folder as a String member.

User Preferences Cube

Create a new cube called _S-User Preferences using the }Clients and _S-User Preferences dimensions.

We need to create a set for the current user so that when we reference any views, we look at information relevant to our user. To do this, open the Set editor and add the following MDX code, then commit, review and save the set as _S-Current User.

Set MDX:

STRTOMEMBER("[}Clients].[" + UserName + "]")

Open the MDX editor for the view and keep its MDX for later:

SELECT 
   {
      [_S-User Preferences].[_S-User Preferences].[File Size Folder]
   } ON 0, 
   {
      STRTOMEMBER("[}Clients].[" + UserName + "]")
   } ON 1 
FROM
   [_S-User Preferences]

You can also update the value of the File Size Folder and set it to: ..\

This will allow us to review files from one folder higher than the current (Data) folder, which should include logs, backups etc.

TI Process

Create a new TI process that will execute the PowerShell script, rebuild our dimension and populate our cube.

I called my process _S-System-File Sizes to group it with my other system processes.

We will update the process in two stages as we need a file for our data source but do not have one yet. Once we create it, we can then use it as our data source.

Update the Prolog section per below:

#Section Prolog

nRecordCount = 0;
sSourceFile = 'files.csv';
sFileSizeFolder = CellGetS( '_S-User Preferences', TM1User(), 'File Size Folder' );

DimensionDeleteAllElements( '_S-File Size Files' );

IF(FileExists( sSourceFile ) = 1);
    ASCIIDelete( sSourceFile );
ENDIF;

ExecuteCommand( 'cmd /c powershell .\GetFileSizes.ps1 -folderPath ''' | sFileSizeFolder | '''', 1 );

#=====CUBE LOGGING=====
#--Turn Logging off
CELLPUTS('NO','}CubeProperties', '_S-File Size', 'LOGGING');
#========================

Save the process and then run it.
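For reference, with File Size Folder set to ..\ the ExecuteCommand line in the Prolog assembles and runs the following command (the doubled single quotes in the TI code produce the literal quotes around the path):

```
cmd /c powershell .\GetFileSizes.ps1 -folderPath '..\'
```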

Once run, there should be a file called files.csv created by the PowerShell script.

Go to the Data source tab of your process and change the Data source to Location.

Update the Location field to: .\files.csv (using the .\ to indicate the current or Data folder but you could just specify files.csv)

Change the Header records from 0 to 1 then click Preview to see available fields.

Update the Variable names to vFolder, vFile and vSize and set them all to String.

Go back to the Script tab and update Metadata, Data and Epilog with the code below:

#Section Metadata

sPreviousParent = '';

#=====UNPACK AND BUILD PATH=====
nDelimiter = SCAN('\', vFolder);
WHILE( nDelimiter > 0 );
    sComponent = SUBST( vFolder, 1, nDelimiter );
    sParent = sPreviousParent | sComponent;

    DimensionElementInsert( '_S-File Size Files', '', sParent, 'C' );
    IF( sPreviousParent @<> '' );
        DimensionElementComponentAdd( '_S-File Size Files', sPreviousParent, sParent, 1 );
    ENDIF;

    vFolder = DELET( vFolder, 1, nDelimiter );
    nDelimiter = SCAN('\', vFolder);
    sPreviousParent = sParent;
END;

#--Add Leaf level files and link to folder
DimensionElementInsert( '_S-File Size Files', '', sParent | vFile, 'N' );
DimensionElementComponentAdd( '_S-File Size Files', sParent, sParent | vFile, 1 );

nRecordCount = nRecordCount + 1;
#==========================

#Section Data

#=====UPDATE CUBE=====
#--Write File Size to leaf element
CellPutN( Numbr(vSize), '_S-File Size', vFolder | vFile, 'Size' );
#=====================

#Section Epilog

#=====CUBE LOGGING=====
#--Turn Logging off
CELLPUTS('YES','}CubeProperties', '_S-File Size', 'LOGGING');
#========================

Save the process, then run it to rebuild the dimension and populate the cube.
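To see what the Metadata loop is doing, here is a minimal Python sketch of the same unpacking (illustrative only; the TI code above is what actually runs): it walks the folder path one backslash at a time, emitting (parent, child) rollups and finally the leaf file element.

```python
def unpack_path(folder, file):
    """Mimic the TI Metadata tab: split FolderName on '\\' and return the
    (parent, child) links that DimensionElementComponentAdd would create."""
    links = []
    previous_parent = ""
    while "\\" in folder:
        pos = folder.index("\\") + 1       # SCAN: 1-based position of '\'
        component = folder[:pos]           # SUBST( vFolder, 1, nDelimiter )
        parent = previous_parent + component
        if previous_parent:                # link folder to its parent folder
            links.append((previous_parent, parent))
        folder = folder[pos:]              # DELET( vFolder, 1, nDelimiter )
        previous_parent = parent
    # Leaf file element parented to its folder, as in the TI code
    links.append((previous_parent, previous_parent + file))
    return links

print(unpack_path("..\\logs\\", "server.log"))
```

For a record with FolderName ..\logs\ and FileName server.log this yields the rollup ..\ into ..\logs\ into ..\logs\server.log, matching the consolidations the process builds.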

My view has the following:

The above values are in bytes as returned by the PowerShell script. You could also scale these within the TI process or by changing the PowerShell script if you want to review sizes in kilobytes, megabytes etc.
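If you do want scaled figures, a conversion along these lines could be ported into the TI process or the PowerShell script (a minimal Python sketch; the function name and formatting are my own choices):

```python
def human_size(n_bytes):
    """Scale a raw byte count into the largest sensible unit."""
    size = float(n_bytes)
    for unit in ("B", "KB", "MB", "GB", "TB"):
        if size < 1024 or unit == "TB":
            return f"{size:.1f} {unit}"
        size /= 1024

print(human_size(5242880))   # 5.0 MB
```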

Build the PAW Book

There are basically 5 components to the book that we will be building:

  1.  A field to input/update the folder to interrogate
  2.  A view to show the size of the selected folder
  3.  A button to call our process and refresh the dimension and cube
  4.  A selector widget to select the folder for review
  5.  A Tree Map exploration to show the contents and relative size of the files and folders

Folder Field

Create a new book and save it in an appropriate location as File Sizes or a similar name.

Update the name of the tab accordingly too.

I typically change my canvas to Absolute layout positioning and set the width to 1850px and height to 780px to maximise the area without having scrollbars but adjust this based on your resolution.

Add a text box and change the contents to Folder:

Set the font to IBM Plex and size of 16. You can adjust the width to 100 px and height to 40 px.

Adjust the inner margins to 0 so that we can position close to the top left corner of the book.

Open a view from the _S-User Preferences cube then update the MDX to show the _S-Current User set on the rows and File Size Folder on columns.

Adjust the width of the File Size Folder column to take up about 20% of your screen. You can fine tune later.

Collapse Overview then Hide it as we do not need any of these items.

Go into the view properties and disable all the Table style, Set editor and Toolbar settings as these will not be required.

Adjust the size of the view to fit the field. Add a black border from the Appearance options.

You should have a book that looks similar to the one below:

Size Field

Duplicate the Folder text box and change the text to Size:

Position it below the Folder text box.

Add the default view we created on our _S-User Preferences cube.

Follow the same steps as previously to adjust the view to make it look and feel like a single cell.

A Single Cell widget does not seem to refresh during synchronisation, which is why a view is used instead. This may be resolved in the future, in which case a Single Cell widget could be used.

Adjust the width of the size field to about half the width of the Folder field as we want a small gap then our Refresh button.

Your book should look something like the following:

Refresh Button

Add an action button after the Size widget and align it to the end of the Folder widget.

Change the text to Refresh and set the font to 16 and the fill colour to something like dark blue.

Change the button properties to do a Run Process and select the database and the process we just created. Tick Refresh after execution as we want to see the updated visualisation.

Our book now looks something like this:

Folder Selector

Next we need to add our selector widget to allow us to click on a particular folder or file and see the contents and size in the exploration.

From the database tree, expand the _S-File Size cube then Dimensions to show _S-File Size Files.

From the kebab menu (three dots), add a Selector List, Single List widget.

Position the list underneath Size and then stretch it to the bottom of the page and across to align to the end of the Refresh button.

Access the properties window and then disable the following Custom properties:

·         Show header

·         Show icon

·         Show overflow menu

In the General properties, update the Border color to black.

I added in some additional members for illustrative purposes and your view should be similar to this:

Tree Map

The last piece of the puzzle is the Tree Map visualisation to render the values in the cube in a way that makes it easier to see what is consuming/occupying the disk space.

From the database tree, add in the default view from the _S-File Size cube.

Stretch the view to align to the bottom of the selection list and all the way across to the right margin of the book.

Change the visualisation from Exploration to Tree Map.

In the properties, change the border to black.

In the Visualization properties, expand Legend and disable both options.

On the toolbar, click the Hide all Totals icon, next to the Format values icon, and enable the options using the toggle, as we do not want to show the total alongside its components in our view.

Go to the Synchronize properties, enable only _S-File Size Files, and then enable Synchronize hierarchies.

Also update the synchronisation on the widget showing the Size to ensure the size of the selected folder or file is shown.

Once synchronised, the book should be functioning and you can test by clicking on items in the selector list.

As you select an item, the Tree Map should update to show a view of the contents of that folder.

Changing the value in the Folder field to something like ..\..\ on PAoC instances would allow you to build a list of all files and folders across all your databases, giving insight into which databases may be an issue and what is causing it, e.g. logs, feeders, backups etc.

Tree Map showing Folder and File Size

Conclusion

Assuming you were able to follow the steps and build the tool, you should now be able to manage your disk space without needing to access the server folder, run third-party tools etc. to identify the folders and files to be actioned.

The tool will not remove any files, merely show you where you need to intervene. You can use tools like FileZilla for your PAoC environments, or potentially extend the solution to run AsciiDelete on selected files.

The cube itself could be extended to include a Count measure, with the TI writing a 1 for each file to give you the total number of files in each folder or branch. Similarly, changing the PowerShell script to return the date modified could assist when pruning old files that may not include a timestamp. There are many more options to extend this basic example and make it your own.
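For example, the date-modified idea only needs one extra property on the script's output object; $_.LastWriteTime is a standard FileInfo property, while the DateModified column name is my own choice:

```
[PSCustomObject]@{
    FolderName   = ($relativeFolderPath + "\")
    FileName     = $_.Name
    FileSize     = $_.Length
    DateModified = $_.LastWriteTime  # extra column to help prune old files
}
```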

Hope you found this article insightful and will find the tool useful.

As always, keen to hear your feedback and suggestions.
