Planning Analytics


Planning Analytics 3.x on-prem

  • 1.  Planning Analytics 3.x on-prem

    Posted Wed June 25, 2025 03:56 PM

    Does anyone know if IBM has released a technical preview of Planning Analytics 3.x, or how to register for access to one?

    Thanks



    ------------------------------
    Matteo Lorini
    ------------------------------


  • 2.  RE: Planning Analytics 3.x on-prem

    Posted Mon June 30, 2025 10:07 AM

    Hi Matteo,

    IBM recently announced in their latest AMA session that the technical preview of PA 3.1 would be available some time in July, and more detail would follow with regards to registering for the preview.

    Simon



    ------------------------------
    Simon Aylett
    ------------------------------



  • 3.  RE: Planning Analytics 3.x on-prem

    Posted Mon July 28, 2025 01:38 PM

    Hello,

    Is Planning Analytics 3.1 Technical preview available to download?

    Thanks,

    Alex



    ------------------------------
    Alexandre
    ------------------------------



  • 4.  RE: Planning Analytics 3.x on-prem

    Posted Tue July 29, 2025 10:42 AM

    Hi All,

    The Tech Preview should start this week.  I'll post a blog with the details as soon as we are ready.  There will be a private group on the community (invite only) to provide feedback.  Reach out to stuart.king@ca.ibm.com for an invite.



    ------------------------------
    Stuart King
    Product Manager
    IBM Planning Analytics
    ------------------------------



  • 5.  RE: Planning Analytics 3.x on-prem

    Posted Wed August 06, 2025 10:18 AM

    Thanks Stuart for sharing this!

    Do you also have any update on the timeline for the containerized version for Local? High Availability comes up in every discussion that I participate in. 



    ------------------------------
    Subhash Kumar
    ------------------------------



  • 6.  RE: Planning Analytics 3.x on-prem

    Posted Wed August 06, 2025 09:41 PM

    Hi Subhash,

    Reach out directly (stuart.king@ca.ibm.com) on this topic.  Note that the Planning Analytics Local 3.1 technical preview does not include the fully containerized deployment.  PAW and PASS are containerized in the Planning Analytics Local 3.1 preview; TM1 v12 is not.  A fully containerized deployment of PA services (PAW, PASS, TM1 v12) is possible.



    ------------------------------
    Stuart King
    Product Manager
    IBM Planning Analytics
    ------------------------------



  • 7.  RE: Planning Analytics 3.x on-prem

    Posted Mon August 25, 2025 10:25 AM

    Stuart, I registered for the Planning Analytics Local 3.1 Tech Preview back in July. Do you know when it will be available?  Thanks



    ------------------------------
    Matteo Lorini
    ------------------------------



  • 8.  RE: Planning Analytics 3.x on-prem

    Posted Fri December 12, 2025 11:35 AM

    Hello Stuart, 

    First, sorry I sent you an email about this; I'm too blind to see the "Reply" button in this topic...
    So I'll repeat what I said in my email: we would like to preview and test the on-premise version 12 of Planning Analytics.


    While some of our clients have already moved to the SaaS version hosted on AWS, others will certainly need to remain on-premise. Many are already showing interest in version 12, and we need to evaluate it in order to answer their questions and prepare accordingly.

    Would it be possible to grant us access to this Technical Preview so we can begin testing?

    From what I understand, only PASS and PAW are currently containerized, while PAL is not yet, though that may be possible.
    Thank you in advance,
    Nicolas


    ------------------------------
    Nicolas LACOSSE
    ------------------------------



  • 9.  RE: Planning Analytics 3.x on-prem

    Posted Fri December 12, 2025 11:44 AM

    Hi Nicolas,

    Apologies for the slow response.  The Planning Analytics product team is no longer providing access to the private preview.  We are working to enable a public technology preview in January 2026.  The public preview will be available to all Planning Analytics Local 2.1 customers.  The public preview will also provide updated versions of all Planning Analytics Local 3.1 components.



    ------------------------------
    Stuart King
    Product Manager
    IBM Planning Analytics
    ------------------------------



  • 10.  RE: Planning Analytics 3.x on-prem

    Posted Mon December 15, 2025 11:17 AM

    Hello Stuart, no worries :) 

    ah, that's a shame. We would have liked to take advantage of a quieter period in the coming weeks to explore the Technical Preview and understand the main changes in v12, as well as its architecture/deployment, etc. Are there any IBM articles or a way to find answers to some of our technical questions?

    Thank you,

    Nicolas



    ------------------------------
    Nicolas LACOSSE
    ------------------------------



  • 11.  RE: Planning Analytics 3.x on-prem

    Posted Wed February 11, 2026 12:40 PM

    Hello Stuart,

    I was able to download the non-containerized v12 and tried to install it on Linux, but I'm facing some issues.

    ./tm1-v12.5.5-standalone-installer-linux.run -- --install-dir /opt/TM1 --deployment-dir /var/lib/TM1 --init true --accept-license
    Verifying archive integrity...  100%   SHA256 checksums are OK.  100%   MD5 checksums are OK. All good.
    Uncompressing TM1 v12.5.5 Installer  100%  
    
    IBM Planning Analytics TM1 v12.5.5 Installer
    
    Created the uninstaller.
    Downloading third-party datastore...
    Installing third-party datastore...
    Registering TM1 as a service...
    Installing TM1 v12 Standalone Linux service...
    Created symlink '/etc/systemd/system/multi-user.target.wants/tm1sd.service' → '/etc/systemd/system/tm1sd.service'.
    Linux service tm1sd installed successfully.
    Starting Linux service tm1sd...
    Linux service tm1sd started successfully.
    Running setup script to generate credentials...
    Setup failed.
    

    And in the log files I get:

    {"level":"error","date":"2026-02-11T17:38:03.882Z","message":"An error occurred, attempting to restart the sub-process...","service":"tm1sd","name":"datastore","error":"signal: illegal instruction","restarts":173,"logger":"serviceLogger","stacktrace":"github.ibm.com/TM1/logzap.(*Logger).parseLog\n\t/root/go/pkg/mod/github.ibm.com/!t!m1/logzap@v0.7.0/logger.go:243\ngithub.ibm.com/TM1/logzap.(*Logger).Error\n\t/root/go/pkg/mod/github.ibm.com/!t!m1/logzap@v0.7.0/logger.go:273\ngithub.ibm.com/TM1/tm1/core/go/cluster.(*Process).handleFailure\n\t/workspace/app/tm1/core/go/cluster/process.go:118\ngithub.ibm.com/TM1/tm1/core/go/cluster.(*Process).wait\n\t/workspace/app/tm1/core/go/cluster/process.go:102"}
    (the same error repeats roughly every second with an incrementing restart count)
    

    I've tried a fresh install and followed the README file and this documentation: https://www.ibm.com/docs/en/planning-analytics/3.1.0?topic=installing-linux

    Do you have any feedback or ideas that could help, please?

    Thank you

    Nicolas



    ------------------------------
    Nicolas LACOSSE
    ------------------------------



  • 12.  RE: Planning Analytics 3.x on-prem

    Posted Thu February 12, 2026 06:12 AM

    Hello,

    I've solved my issue. It was a virtual CPU generation problem.
    In case someone faces the same issue: I'm using Proxmox as my virtualization platform and had configured my vCPU with the "x86-64-v2-AES" type. It seems the PA v12 binaries need at least "x86-64-v3", because they use AVX instructions (available from v3).
    I switched the vCPU type to "Host" and everything runs fine.
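    For anyone checking a VM before installing, here is a minimal sketch (assuming a Linux guest, and that it is the bundled datastore that needs AVX, per the discussion below) to test whether the vCPU exposes the AVX2 instructions that the x86-64-v3 level guarantees:

    ```shell
    #!/bin/sh
    # Hedged sketch: look for the avx2 CPU flag; x86-64-v3 guarantees AVX2,
    # x86-64-v2 does not. Messages here are illustrative, not product output.
    has_avx2() {
      # Reads /proc/cpuinfo by default, or a file passed as $1 (for testing).
      grep -q -w avx2 "${1:-/proc/cpuinfo}" 2>/dev/null
    }

    if has_avx2; then
      echo "AVX2 present - the v12 datastore should be able to start"
    else
      echo "AVX2 missing - raise the Proxmox vCPU type to 'host' (or x86-64-v3)"
    fi
    ```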

    Thank you

    Nicolas



    ------------------------------
    Nicolas LACOSSE
    ------------------------------



  • 13.  RE: Planning Analytics 3.x on-prem

    Posted Thu February 12, 2026 07:46 AM

    Hi Nicolas,

    I feel a bit guilty for not responding to this sooner.  I'm also a Proxmox user and hit almost exactly the same issue with one of the services in Workspace 3.1.  It would probably be better if Proxmox defaulted to the Host CPU type when creating new VMs.

    Just as a heads up, we are still adding some documentation to cover the OIDC configuration and connection to the TM1 12 instance from Planning Analytics Workspace 3.1.  Documentation should be in a better state by next week.

      



    ------------------------------
    Stuart King
    Product Manager
    IBM Planning Analytics
    ------------------------------



  • 14.  RE: Planning Analytics 3.x on-prem

    Posted Thu February 12, 2026 08:27 AM

    Hello Stuart,

    No problem :)
    Do you have an updated example of paw.env using a v12 TM1 database, until the full documentation is ready, please?

    Maybe I missed something to finish the database configuration, because at the end of the PAL v12 deployment I get my Client ID and Secret, but I didn't see in the documentation or README how to create a database, what to do with the Client ID and Secret, etc.

    Thank you

    Nicolas



    ------------------------------
    Nicolas LACOSSE
    ------------------------------



  • 15.  RE: Planning Analytics 3.x on-prem

    Posted Thu February 12, 2026 09:37 AM

    Hi Nicolas, 

    The Client ID and Secret you get for the default TM1 service instance at the end of the TM1 v12 installation are what you need to set up your DEFAULT environment in PAL/PAW.

    I trust you managed to get your OIDC set up in PA; there are some gotchas there too that aren't documented well. Let me know if you didn't get past that yet, and I'll tell you how to patch up the PA side to make that run as well.

    Presuming you did successfully manage to get OIDC configured and get into PAW, you'll have to go to Administration, then Environments, and then update the settings for your DEFAULT environment. This is where you'll put the URL you got from the TM1 v12 installation in the Service Instance URL field, followed by the Client ID and Client Secret.

    I've no hands-on experience using Proxmox myself, but depending on how/where you ended up setting up PA and TM1 v12, you might need to replace 'localhost' in the URL you got from the TM1 v12 installation with something environment-specific and/or use IP addresses, and make sure traffic is allowed to route that way.
    Keep in mind that rootless Podman containers do not share the host's network namespace, so `localhost` will not work. In such a case, presuming you'd be using Podman 3.3 or higher, you'd have to use the special hostname `host.containers.internal` instead.
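    As a minimal sketch of that substitution (the URL and port here are hypothetical, not real installer output):

    ```shell
    #!/bin/sh
    # Hypothetical value printed by the TM1 v12 installer; only the
    # localhost -> host.containers.internal substitution is the point.
    TM1_URL="http://localhost:4444"
    # Rootless Podman containers have their own network namespace, so the
    # host must be reached via the special hostname (Podman 3.3+).
    PAW_SIDE_URL=$(printf '%s' "$TM1_URL" | sed 's/localhost/host.containers.internal/')
    echo "$PAW_SIDE_URL"   # -> http://host.containers.internal:4444
    ```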

    Hope this helps,



    ------------------------------
    Hubert Heijkers
    STSM, Program Director TM1 Functional Database Technology and OData Evangelist
    ------------------------------



  • 16.  RE: Planning Analytics 3.x on-prem

    Posted Thu February 12, 2026 10:18 AM

    Hello Hubert,

    I didn't configure OIDC; I don't want to. I just want to do a simple test, since the documentation isn't complete yet and I need to understand the overall setup of v12.
    I saw that in PAW 3.1 we still have the possibility to configure a TM1 server as the authentication provider, so I tried that. But PAL v12 uses many ports, and in v11 we had to set the HTTPPortNumber of ONE database in PAW to provide authentication; I still haven't found how/where to manage my databases in v12.

    Thank you

    Nicolas



    ------------------------------
    Nicolas LACOSSE
    ------------------------------



  • 17.  RE: Planning Analytics 3.x on-prem

    Posted Thu February 12, 2026 10:20 AM

    This is the paw.env file from my PA Local 3.1 deployment.  

    export ENABLE_VIEW_EXCHANGE="true"
    export ENABLE_MULTI_ENV="true"
    export ENABLE_AUDIT_LOGGING="true"
    ### OIDC ...
    export PAAuthMode="oidc"
    export OIDC_CLIENT_ID="waMPwwEp937nKJgk63VFC2pBzUDzU0PAsOslobXM"
    export OIDC_CLIENT_SECRET="****************************************************************************************************************"
    export OIDC_REDIRECT_URI="http://palocal31/login"
    export OIDC_ISSUER="http://192.168.68.69:9000/application/o/planning-smaa/"
    export OIDC_DISPLAY_NAME_CLAIM="preferred_username"
    export OIDC_LOGIN_ID_CLAIM="preferred_username"

    Two important notes:

    1 - I use Authentik (https://goauthentik.io/) for OIDC

    2 - The connection to the TM1 12 instance is configured in the Environments tile in Planning Analytics Administration.  You can deploy, configure, and log into Workspace 3.1 without a TM1 12 instance.  



    ------------------------------
    Stuart King
    Product Manager
    IBM Planning Analytics
    ------------------------------



  • 18.  RE: Planning Analytics 3.x on-prem

    Posted Thu February 12, 2026 11:01 AM
    Edited by Hubert Heijkers Thu February 12, 2026 11:03 AM

    Hi Nicolas,

    The configuration that Stuart describes works with goauthentik.io but might not work with other OIDC providers, most notably because you can't control the 'audience' parameter. As such, IMHO the preferred way is, instead of exporting individual OIDC_ variables representing the various OIDC configuration properties, to export only the OIDC_CONFIG variable containing a JSON object with all the properties required for your OIDC configuration. Here is an example:

    export PAAuthMode="oidc"
    export OIDC_CONFIG='{ "issuer": "<your-OIDC-issuer-URI>", "discoveryURL": "<your-OIDC-issuer-discovery-URL>", "clientId": "<YourClientId>", "clientSecret": "<YourClientSecret>", "audience": "<audience-as-specified-with-your-provider>", "redirectURI": "http://localhost:8080/login", "loginIdClaim": "", "displayNameClaim": "" }'

    Please be aware that the redirectURI must match one of the allowed callback URLs registered with your OIDC provider. If you intend to make your local PA available to machines other than the one you are installing on, then you'll have to update the redirectURI accordingly.

    The discoveryURL is optional if the provider conforms to the OIDC specification and the discovery document lives at <your-OIDC-issuer-URI>/.well-known/openid-configuration.
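    As a small sketch of that default (the issuer value below is hypothetical), the spec-conformant discovery URL is just the issuer with the well-known path appended:

    ```shell
    #!/bin/sh
    # Per OIDC discovery, provider metadata is served at
    # <issuer>/.well-known/openid-configuration, which is why an explicit
    # discoveryURL in OIDC_CONFIG is usually unnecessary.
    issuer="https://login.example.com/realms/pa"   # hypothetical issuer
    discovery="${issuer%/}/.well-known/openid-configuration"
    echo "$discovery"   # -> https://login.example.com/realms/pa/.well-known/openid-configuration
    ```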

    Also note that if you don't specify an audience, PA uses <YourClientId> instead (and this one specifically you can only override using this OIDC_CONFIG variable).

    However, all OIDC configuration variables exported in the paw.env script need to be made available to the root container. Whilst all the OIDC_* variables Stuart mentions are, OIDC_CONFIG is not, so we have to add this variable to the list of variables in the docker-compose file, optionally removing all the other OIDC_* variables as they are no longer needed.
    To do so, simply edit the docker-compose.yml file, remove all the OIDC_* variables, and replace them with OIDC_CONFIG only.

    Just for completeness, hoping you don't need it but will help others that might run into OIDC configuration issues who find this discussion.



    ------------------------------
    Hubert Heijkers
    STSM, Program Director TM1 Functional Database Technology and OData Evangelist
    ------------------------------



  • 19.  RE: Planning Analytics 3.x on-prem

    Posted Tue February 17, 2026 06:17 AM

    Hello,

    There are a lot of frustrating things about this V12 issue, whether it's the containerized version or the non-containerized Technical Preview.

    I find there's a serious lack of documentation, especially simple instructions. I usually have no trouble configuring OpenID on other applications, because the documentation clearly specifies, depending on the provider, which values to use and where to enter them.

    Here, if I refer to the IBM documentation:

    "The callback URL must be registered with your OIDC provider as http(s)://<host>:<port>/auth/v1/oidc/callback"

    Except that V12 seems to use about ten ports: how are we supposed to identify the correct port for the callback?

    According to Stuart's example, for PAW we simply redirect to port 80 (or 443), which makes sense since it's already the PAW "front" entry point. But from what I understand, OIDC needs to be configured on both the PAL and PAW sides.

    So, which port should I use for PAL? Currently, I can't find the information I need to proceed smoothly.

    I understand that this is a preview and that everything isn't perfect, but there should still be the minimum requirements to allow for smooth progress on important configurations. Furthermore, we're somewhat confused internally because version v12 CC seems to have been fully released, and we don't really understand why the non-containerized version needs a technical preview.

    Thanks for your help on this subject.
    Nicolas



    ------------------------------
    Nicolas LACOSSE
    ------------------------------



  • 20.  RE: Planning Analytics 3.x on-prem

    Posted Tue February 17, 2026 09:14 AM

    Hi Nicolas,

    There is no V12 issue here, you ONLY configure OIDC for PA!

    PA on local, just like PA aaS, PA on CP4D, and PACC, connects to TM1 v12 using service-to-service authentication, for which you specify the credentials (the Client-ID and Secret you got at the end of the TM1 v12 installation) in the DEFAULT environment in PAA.

    In other words: after you have installed both PAL and TM1 v12, configured PAL, and successfully logged into PA, the only remaining thing to do is edit the properties of your DEFAULT environment (the URL, Client-ID, and Client-Secret for the TM1 v12 service) and you are done!

    Let me know where you get stuck and happy to jump on a call and go through things, just reach out to me on e-mail.

    Cheers,  



    ------------------------------
    Hubert Heijkers
    STSM, Program Director TM1 Functional Database Technology and OData Evangelist
    ------------------------------



  • 21.  RE: Planning Analytics 3.x on-prem

    Posted Tue February 17, 2026 12:29 PM

    Hi Hubert,

    I finally understood that the expected elements in the PAW environments' administration panel corresponded to the PAL outputs at the end of the installation.

    When trying to create a new database from the administration screen, I initially got a fairly generic error. Trying again a few moments later, it worked. I don't really understand why, but the problem seems to have disappeared.

    My "basic" deployment of the non-containerized V12 is therefore complete for the time being.

    However, I've noticed that despite the automatic user import setting in PAW being set to True, members of my organization are getting an error when logging into PAW. I have to manually create their user, using the ID displayed in the error message.

    The next step, I think, is:

    • for the consultants: start testing the platform,
    • for me: write documentation,
    • then test the containerized version of V12 - but that will be the next step.

    Thank you

    Nicolas



    ------------------------------
    Nicolas LACOSSE
    ------------------------------



  • 22.  RE: Planning Analytics 3.x on-prem

    Posted Tue February 17, 2026 06:59 AM
    Edited by Nicolas LACOSSE Tue February 17, 2026 06:59 AM

    For those who use Azure here an example of configuration:

    export OIDC_CLIENT_ID="<client_id>"
    export OIDC_CLIENT_SECRET="<client_secret>"
    export OIDC_REDIRECT_URI="https://url_to_paw/login"
    export OIDC_ISSUER="https://login.microsoftonline.com/<tenant_id>"
    export OIDC_LOGIN_ID_CLAIM="preferred_username"
    export OIDC_DISPLAY_NAME_CLAIM="preferred_username"

    I'm still trying to work out how to configure PAL OIDC.

    Thank you

    Nicolas



    ------------------------------
    Nicolas LACOSSE
    ------------------------------



  • 23.  RE: Planning Analytics 3.x on-prem

    Posted Thu February 12, 2026 09:14 AM

    Hi Nicolas, glad to hear you worked it out in the meantime. I had asked around as well: it's actually not TM1 that needs AVX but rather MongoDB, hence the reference to "datastore", which in the non-containerized installation of TM1 v12 is yet another (micro-)service managed by the TM1 daemon service (tm1sd).



    ------------------------------
    Hubert Heijkers
    STSM, Program Director TM1 Functional Database Technology and OData Evangelist
    ------------------------------



  • 24.  RE: Planning Analytics 3.x on-prem

    Posted Thu February 19, 2026 03:01 PM

    Hi all

    Is there any guidance regarding HTTP pass-through authentication? We got stuck connecting PAL to TM1 v12.

    We managed to log on to PAW with http authentication to TM1v11 server.

    We set up TLS for PAW, PA server.

    We added HTTP pass-through authentication to config.internal.json.

    When we specify the credentials (the Client-ID and Secret we got at the end of the TM1 v12 installation) in the DEFAULT environment in PAA, we get an error: "Connection test rejected".

    WAProxy log:

     response: {
        statusCode: 500,
        headers: {
          'content-type': 'application/json',
          'content-language': 'en-US',
          'content-length': '89',
          connection: 'Close',
          date: 'Thu, 19 Feb 2026 12:30:52 GMT'
        },
        body: [
          '{',
          '  "error": "HttpHostConnectException",',

    Thanks for your help on this subject.

    Peter



    ------------------------------
    peter kolesar
    ------------------------------



  • 25.  RE: Planning Analytics 3.x on-prem

    Posted Fri February 20, 2026 05:51 AM

    Hi Peter,

    Sounds like you are trying to connect a PAL 2.x to a TM1 v12 service; that's not supported by PA (TM1 doesn't know nor care).

    In other words: you need a PAL 3.1 installation, configured to use your OIDC provider, and then, as described above, point your DEFAULT environment in PAA at your TM1 v12 service instance using the details you got at the end of the installation of your TM1 v12 service.



    ------------------------------
    Hubert Heijkers
    STSM, Program Director TM1 Functional Database Technology and OData Evangelist
    ------------------------------



  • 26.  RE: Planning Analytics 3.x on-prem

    Posted Mon March 16, 2026 05:24 PM

    Deployment Report: OpenShift and IBM Cloud Pak for Data

    We attempted to deploy PA Certified Container (OpenShift and IBM Cloud Pak for Data) and encountered several roadblocks. Being completely new to these technologies, the following analysis outlines the numerous obstacles encountered and highlights the significant gaps and shortcomings we found between the official documentation and the reality of a test or small business environment.

    This experience allowed me to identify the heavy dependency chain of this deployment: Planning Analytics Certified Containers seem to require the Cloud Pak for Data environment, which itself depends on the ZenService foundation on OpenShift.

    However, it is clear that not all clients can afford or equip themselves with a complete OpenShift infrastructure, which represents a significant cost and requires advanced skills. It is unreasonable to expect all clients to deploy or regularly use OpenShift (due to cost or security concerns).

    Issue 1: local-storage StorageClass not created

    • Context: The initial space was sufficient for the basic OpenShift pods but not for Planning Analytics (PA), so I had to add more space.

    • My solution: I deleted the strict LocalVolume in favor of a Local Volume Set to force automatic disk detection.

    • The IBM Flaw (Silent failures): The interface and documentation suggest that the creation was successful ("Succeeded" status), without warning me that the operator refused to create the class in the background.

    Issue 2: Blocking Node Selector configuration

    • Context: To host the TM1 databases of Planning Analytics (which are very RAM-intensive), it was necessary to precisely target the correct compute nodes (workers).

    • My solution: I had to manually configure the kubernetes.io/os key with the In operator and the linux value.

    • The IBM Flaw (Lack of examples): The installation interface requires very specific node selectors, but the documentation does not provide the standard values to use for a basic environment.

    Issue 3: No volume (PV) generated (non-empty disk)

    • Context: Planning Analytics needs to dynamically create multiple distinct volumes to separate its logs, configuration, and TM1 databases.

    • My solution: I abandoned the IBM Local Storage operator and deployed the local-path-provisioner tool to simulate dynamic storage.

    • The IBM Flaw (The illusion of local disk): The documentation assumes I own an enterprise storage array. By using a single 300 GB physical disk, the first pod consumed all the space, permanently blocking the rest of the installation.
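    For anyone taking the same local-path-provisioner route, a sketch of the StorageClass the upstream rancher/local-path-provisioner installs by default (verify against the manifest of the version you deploy):

    ```yaml
    # Dynamic "local disk" provisioning via rancher.io/local-path; volumes are
    # plain directories on the node's filesystem, which is why a single 300 GB
    # disk ends up shared (and exhausted) by all PVCs.
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: local-path
    provisioner: rancher.io/local-path
    volumeBindingMode: WaitForFirstConsumer
    reclaimPolicy: Delete
    ```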

    Issue 4: ZenService installation failed on the old storage

    • Context: The service foundation (ZenService) is a mandatory prerequisite to run the Planning Analytics web administration interface (PA Workspace).

    • My solution: I modified the lite-cr YAML file to point to the new storage (local-path) and manually deleted the blocking user-home-pvc disk.

    • The IBM Flaw (Inheritance issue): Changing the main configuration does not update the sub-components. The documentation does not explain that you have to manually clean up the old disks for the installer to understand the change.

    Issue 5: zen-metastoredb database stuck in Pending

    • Context: This internal database is essential for managing users, rights, and metadata in the Planning Analytics environment.

    • My solution: I deleted the StatefulSet component and its old disks (PVCs) to force a clean reinstallation.

    • The IBM Flaw (Inability to self-heal): The system does not know how to correct its own path errors. The documentation omits the critical "purge" procedure needed to unblock a frozen component.

    Issue 6: Global status stuck at 51%

    • Context: The installation of the base Cloud Pak foundation remained stuck halfway, preventing the orchestrator from launching the final Planning Analytics deployment.

    • My solution: I forcefully restarted the supervisor pod cpd-platform-operator-manager to refresh the state.

    • The IBM Flaw (Loss of synchronization): The main orchestrator loses track of the installation if there are manual modifications, and the documentation provides no "reset button" to revalidate the status.

    Issue 7: Massive return of pods to Pending status

    • Context: The final containers, essential for the operation of Planning Analytics TM1 cubes, could never initialize due to a lack of stable disk access.

    • My solution: Stopped investigations at this stage. The local-path storage could no longer write to my Proxmox hypervisor.

    • The IBM Flaw (Infrastructure elitism): IBM assumes that a dynamic, robust, and perfectly configured storage system (NFS, Ceph, ODF) is already in place. No alternative or clear workaround procedure is provided for test labs or restricted environments.

    Would it be possible to get feedback on the exact state of the OpenShift stack and the various expected services to be able to deploy PA CC as simply as possible?

    In any case, even if these are prerequisites, it will be necessary to be able to provide clients with this highly precise information.



    ------------------------------
    Adan Sourou
    ------------------------------