✨ Access to our full platform at a special price ✨

The AI Image Generation Platform for Start-Up Founders & Tech Innovators

Explore Bria's Essential Building Blocks for AI Images

Build Controllable & Predictable AI Image Generation Directly Into Your App

As a tech founder or product leader, you’re constantly seeking ways to integrate innovative AI features that enhance user experience and deliver value. Bria.ai empowers you to incorporate advanced, on-brand image generation models directly into your product, offering your users cutting-edge AI functionality while keeping development streamlined and efficient.

Special Offer for Startups*

US$700/Month Platform Fee
Includes 5000 Actions/API calls per month 

 

Build and incorporate pipelines into your existing workflow

Curated AI Components

Deliver high-quality, on-brand visuals at scale without the hassle of building AI from scratch.

 

Component List
Open Source Models

Tailor the platform to your specific needs and contribute to ongoing innovation.

 

Browse Hugging Face
Customizable & Developer-Friendly API

Bria's API option enables quick integration of AI image features and lets you build custom pipelines suited to your needs.

Get API Key
Scale Without Complexity

Scale effortlessly with Bria's infrastructure, allowing you to focus on product development while we handle image generation.

Explore Models

Respecting Copyrights and Privacy

Bria AI envisions a future of fair compensation for creators. Our patented attribution engine ensures legal compliance and tracks usage so creators are paid according to their impact on each generated image, counted as a model action. This is key to a sustainable creative ecosystem.

Bria's attribution engine enables licensing the content for every image you create. Our exclusive plan includes content licensing for up to 1,000 model actions. Beyond this cap, usage fees apply (US$0.005-0.04) to cover additional content licensing costs.
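For example, assuming the fee applies per model action, a month with 1,500 model actions would have the first 1,000 covered by the plan, and the remaining 500 would add roughly US$2.50-US$20 in licensing fees (500 × US$0.005 to 500 × US$0.04).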

Special Offer for Startups*

US$700/Month Platform Fee
Includes 5000 Actions/API calls per month 

 

Bria AI

Responsible & Open
Platform

Trained on 100%
licensed data

Customize models
to make them your own

Model source code
and weights available

Explore Bria's foundation models

Bria AI stands at the forefront of responsible and open Visual Generative AI. Our platform offers a rich array of foundation models, providing both the source code and weights, alongside a suite of tools including APIs, SDKs, and accessible no-code/low-code components. This suite is crafted to facilitate the responsible, legal, and scalable development of visual AI applications.

We invite you to explore and freely evaluate the potential of our models:

Compare

Access performance benchmarking of leading commercial text-to-image models.
Learn more >

 

Test

Use our APIs to test our models' capabilities or build prototypes.
Learn more >

 

Envision

Explore our reference WebApp and iFrame to inspire your creations.
Learn more >

 

Frequently Asked Questions

Everything you never knew you wanted to know about building on top of Bria.

Are the models compatible with the standard diffusers library and common fine-tuning methods?

Yes, BRIA's models are compatible with the standard diffusers library, LoRA (Low-Rank Adaptation), and common fine-tuning methods. BRIA provides advanced customization options and access through platforms such as Hugging Face, offering features like Fast LoRA and ControlNets, which facilitate fine-tuning and specialized model adjustments. These capabilities allow researchers to integrate and fine-tune BRIA's models effectively within their workflows, leveraging the diffusers library for tasks such as image generation and modification.

In the Bria Platform, you can also easily train LoRA models, e.g., for Tailored/Branded Generation.
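Since the models follow standard diffusers conventions, loading one locally might look like the sketch below. The repository id (briaai/BRIA-2.3) and the commented LoRA weight path are illustrative assumptions; check Hugging Face and the Bria documentation for the exact identifiers and any license gating.

```python
# Minimal sketch: loading a BRIA text-to-image model with the diffusers library.
# The repo id and LoRA path below are illustrative assumptions.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "briaai/BRIA-2.3",          # assumed Hugging Face repo id
    torch_dtype=torch.float16,
    trust_remote_code=True,     # some BRIA pipelines ship custom pipeline code
)
pipe.to("cuda")

# Optionally attach LoRA weights produced by your own fine-tuning run (hypothetical path).
# pipe.load_lora_weights("path/to/your_lora_weights.safetensors")

image = pipe(prompt="a product photo of a ceramic mug on a wooden table").images[0]
image.save("mug.png")
```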

What are the licensing terms and usage rights for the source code?

The licensing terms and usage rights for the Bria API and its associated tools are detailed within the provided documentation. Here are the key points regarding licensing and commercial use:

Commercial Use:

  • The API and its tools can be used for generating and customizing images, including creating tailored models and enhancing product imagery for e-commerce platforms.
  • There are no explicit restrictions mentioned on commercial use within the provided sections of the document. 

Licensing Terms:

  • Access to the API requires registration and obtaining an API token, which is associated with the organization.
  • There are different routes and methods for utilizing the API, including endpoints for image generation, model training, and dataset management.
  • Specific licensing terms for accessing the underlying model source code via platforms like Hugging Face are referenced in the documentation, along with additional features and advanced control over image generation.

Usage Rights:

  • Users can generate, train, and manage their own tailored models.
  • The API supports a variety of image formats (JPEG, PNG) and provides various functionalities such as image upscaling, background replacement, and quality predictions.
  • The documentation includes guidelines on how to handle image uploads, dataset creation, and model management, ensuring compliance with the API's supported formats and constraints.

For specific details and any potential limitations, it's recommended to review the entire licensing section of the API documentation or contact Bria support directly at support@bria.ai.

What kind of data is required for fine-tuning the model?

Fine-tuning a model generally requires a carefully curated dataset that aligns with the specific use case for which you are fine-tuning the model. Here are the key points regarding the data required for fine-tuning:

  1. High-Quality Data: The data should be of high quality, free of errors, and representative of the task at hand.
  2. Diverse Examples: The dataset should include a diverse set of examples to cover various aspects of the task, ensuring the model can generalize well.
  3. Task-Specific Data: The data should be relevant to the specific application.
  4. Balanced Data: It should be balanced to avoid bias.
  5. Annotated Data: Depending on the task, the data may need to be annotated.

Within the Bria Console, under Tailored Generation (https://platform.bria.ai/console/tailored-generation/models), click on Dataset Best Practices for further guidance.

Other Key Considerations:

  • It is critical to train on data that you have the explicit right to use or are licensed to use. 
  • Multi-modal models require high-quality multi-modal inputs. For example, for text-to-image, the quality of the captions as well as the images matters. If you want a certain color in a certain area (e.g., a purple face), the training images should include examples of purple faces, and these should be described in the captions as well as in the prompts.
  • The textual elements of training must align with the model’s encoder capabilities; CLIP allows a maximum of 77 tokens (see the token check after this list).
  • Each use case will have its own best practices because our capabilities provide the flexibility to work across many use cases and enable refinement of as many models as needed.
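As a quick illustration of the 77-token constraint mentioned above, the sketch below counts caption tokens with a CLIP tokenizer from the transformers library; the specific tokenizer checkpoint is an illustrative stand-in, not necessarily the encoder Bria uses.

```python
# Check a training caption against CLIP's 77-token limit.
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")  # illustrative checkpoint
caption = "studio packshot of a ceramic mug with a purple face painted on its side"
token_ids = tokenizer(caption)["input_ids"]
print(len(token_ids), "tokens;", "OK" if len(token_ids) <= 77 else "too long for CLIP")
```
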
Has the model been trained only using proper artist attribution and royalty payments?

Bria guarantees its developers that there are no legal or privacy issues. Establishing a new standard in AI research and development, the platform offers a comprehensive range of licensed models, including both source code and weights, along with a robust toolkit to expedite and enhance the research process.

Our exclusive attribution engine: Our innovative attribution engine compensates data contributors and creators based on their visual impact on each visual generation, fostering a sustainable economy.

What is the architecture of the model, and can it be easily modified to fit your specific use case?

For developers looking to dive deeper into customization, accessing BRIA's models directly through Hugging Face opens up a world of possibilities. This option grants access to the model's source code, unlocking additional features such as Bria 2.3 Fast LoRA and ControlNets (Canny, Depth, and ReColoring). These features are ideal for developers who want more control over the image generation process and for those interested in seamlessly integrating state-of-the-art AI into their workflows. Furthermore, our APIs offer a wide range of flexibility and customization options to accommodate specific use cases.

From an API point of view, here are the relevant architecture details:

Architecture Overview

Dataset Management: The model architecture allows for comprehensive dataset management, including:
  • Creating and Retrieving Datasets: Using endpoints to manage datasets.
  • Uploading and Retrieving Images: Managing images within these datasets.
  • Managing Individual Images: Getting or deleting specific images from a dataset.
Model Management: This involves the creation and management of models based on the datasets:
  • Creating and Retrieving Models: Using specific endpoints to handle models.
  • Getting and Deleting Specific Models: Managing models by their ID.
Training Management: Managing the training lifecycle of models:
  • Start Training: Initiating the training process once the dataset meets specific criteria.
  • Stop Training: Stopping an ongoing training job.
Generating Images: Once trained, the models can be used to generate images:
  • Base Model Generation: Allows generating images from textual prompts.
  • Tailored Generation: Enables creating images using custom-trained models for unique visual styles or specific use cases.
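As an illustration of base-model generation, a minimal text-to-image request might look like the sketch below. The base URL, route shape (/text-to-image/base/{model_version}), header name, and payload fields are assumptions and should be confirmed against the Bria API reference.

```python
# Hedged sketch of a base text-to-image request (route and fields are assumptions).
import requests

BASE = "https://engine.prod.bria-api.com/v1"   # assumed base URL
HEADERS = {"api_token": "YOUR_API_TOKEN"}       # token obtained after registration

resp = requests.post(
    f"{BASE}/text-to-image/base/2.3",           # assumed model-version route
    headers=HEADERS,
    json={"prompt": "photorealistic lifestyle shot of a desk lamp on a wooden desk",
          "num_results": 1},
)
print(resp.status_code, resp.json())
```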

Customizability

The model architecture offers extensive customization options, allowing for the development of personalized models that can be trained with your unique datasets. Here's how it can be adjusted to suit specific use cases:

Training Tailored Models:
  • Create a dataset and upload images.
  • Use the dataset to train a new model.
  • Initiate and monitor the training process.
  • Generate images using the newly trained model.
Advanced Customization:
  • Developers can access models directly through platforms like Hugging Face for deeper customization.
  • This includes features like Bria 2.3 Fast LoRA and ControlNets for advanced control over image generation processes.
Integration and API Access:
  • The Bria API provides endpoints for creating, managing, and deploying models.
  • The API supports a range of parameters for generating images, including prompts, number of iterations, aspect ratios, and more.
E-commerce Solutions:
  • Specific tools for creating product cutouts, packshots, and lifestyle images tailored for e-commerce platforms.

Detailed Customization Steps

Create a Dataset: Use the /datasets endpoint.
Upload Images: Use the /datasets/{dataset_id}/images endpoint.
Create a Model: Use the /models endpoint.
Complete the Dataset: Ensure it meets the minimum image requirements.
Start Training: Initiate training using the /models/{id}/start_training endpoint.
Generate Images: Once trained, use the /text-to-image/tailored/{model_id} route for generating images.
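Putting these steps together, an end-to-end sketch might look like the following. The base URL, header name, JSON field names, and the polled status value are assumptions; confirm them against the Bria API reference before use.

```python
# Hedged sketch of the tailored-model workflow using the documented endpoints.
import time
import requests

BASE = "https://engine.prod.bria-api.com/v1"   # assumed base URL
HEADERS = {"api_token": "YOUR_API_TOKEN"}       # token from the Bria console

# 1. Create a dataset (field names are illustrative).
dataset = requests.post(f"{BASE}/datasets", headers=HEADERS,
                        json={"name": "mug-styles"}).json()

# 2. Upload training images to the dataset.
with open("mug_01.png", "rb") as f:
    requests.post(f"{BASE}/datasets/{dataset['id']}/images",
                  headers=HEADERS, files={"file": f})

# 3. Create a model bound to the dataset.
model = requests.post(f"{BASE}/models", headers=HEADERS,
                      json={"dataset_id": dataset["id"], "name": "mug-style-v1"}).json()

# 4. Start training, then poll the model until it reports completion
#    (the "Completed" status string is an assumption).
requests.post(f"{BASE}/models/{model['id']}/start_training", headers=HEADERS)
while requests.get(f"{BASE}/models/{model['id']}", headers=HEADERS).json().get("status") != "Completed":
    time.sleep(60)

# 5. Generate images with the newly trained tailored model.
result = requests.post(f"{BASE}/text-to-image/tailored/{model['id']}",
                       headers=HEADERS,
                       json={"prompt": "studio packshot of a ceramic mug", "num_results": 1})
print(result.json())
```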

This architecture and customizability framework provides a flexible solution for various use cases, allowing users to create models tailored to their specific needs and integrate advanced image generation capabilities into their workflows.

What are the performance metrics of the model on standard benchmarks?

We propose a framework that evaluates various criteria and dimensions, utilizing a systematic, unbiased, and statistically grounded evaluation methodology. By using this framework, we conducted a comprehensive analysis of leading Text-to-Image models developed for commercial purposes up to March 2024. The aim of this framework and comparison of different models is to aid global brands in selecting suitable T2I models and platforms that align with their business objectives, operational constraints, and technological ambitions.

Additionally, our model's capabilities and quality prediction functionalities provide an insight into its performance.

Key Features and Performance Indicators

  1. Image Quality:
    • The model supports generating high-quality, photorealistic, and artistic images with resolutions up to 1920x1080 pixels for HD models and up to 1024x1024 pixels for fast models.
    • It includes advanced customization features like ControlNets for specific image enhancements such as Canny, Depth, and ReColoring, providing developers with more control over image quality and detail.
  2. Quality Predictions:
    • The API includes an "Oracle" capability that predicts the quality of results for various actions like creating, removing objects, and background operations. This feature enhances user experience by ensuring optimal outcomes and minimizing errors during the image generation process.
  3. Advanced Customization and Access:
    • Developers can access BRIA's models directly through Hugging Face, allowing for deeper customization and integration of cutting-edge AI features into workflows. This includes additional features like Bria 2.3 Fast LoRA and ControlNets.
  4. Training and Generation:
    • Training a tailored model takes between 1-3 hours, indicating efficient processing times for model training.
    • The model allows for generating images using tailored models trained on specific datasets, enhancing the ability to create unique visual styles and content.
  5. E-commerce Solutions:
    • Specialized features for e-commerce applications include creating product cutouts, packshots, and lifestyle images, ensuring high-quality imagery for online stores and product catalogs.

What are the computational resources required for training and fine-tuning the model?

Training and Fine-Tuning

  1. Training Duration:
    • Training a tailored model typically takes between 1 and 3 hours.
  2. Steps for Training:
    • Create a Dataset: Use the /datasets endpoint.
    • Upload Images: Upload your images to the created dataset using the /datasets/{dataset_id}/images endpoint.
    • Create a Model: Use the /models endpoint to create a model associated with your dataset.
    • Complete the Dataset: Ensure the dataset meets the minimum image requirements and has a status of 'completed'.
    • Start Training: Use the /models/{id}/start_training endpoint to initiate the training process.
    • Monitor Training: Optionally check the training status to monitor progress.
    • Generate Images: Use the /text-to-image/tailored/{model_id} route to generate images once the model is trained.

Hardware Requirements

Given the nature of the tasks (image generation and model training), the following hardware recommendations can be inferred:

  1. GPU:
    • A powerful GPU is recommended to handle the computational load for training and generating high-resolution images efficiently.
    • NVIDIA GPUs with CUDA support are typically preferred for deep learning tasks.
  2. CPU:
    • A multi-core CPU to manage data preprocessing and other operations that are not GPU-accelerated.
  3. Memory (RAM):
    • Sufficient RAM (at least 16GB, preferably 32GB or more) to manage large datasets and training processes.
  4. Storage:
    • SSD storage is recommended for faster data access and model checkpointing during training.

Software Requirements

  1. Operating System:
    • The system should be running a modern operating system compatible with the software stack (e.g., Linux, Windows, macOS).
  2. Python Environment:
    • Python is the primary programming language used for interacting with the Bria API.
    • Ensure that Python and the required libraries (such as TensorFlow or PyTorch, depending on the underlying model framework) are installed; a quick environment check is sketched after this list.
  3. API Access:
    • Access to the Bria API requires an API token, which can be obtained by registering on the Bria platform.
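As a quick sanity check of the Python environment mentioned above, the snippet below verifies that the commonly assumed libraries are importable; the package list is illustrative, not a definitive requirements file.

```python
# Verify that the assumed libraries (requests for the API; torch/diffusers for local runs) import cleanly.
import importlib

for pkg in ("requests", "torch", "diffusers"):
    try:
        importlib.import_module(pkg)
        print(f"{pkg}: OK")
    except ImportError:
        print(f"{pkg}: missing - install with `pip install {pkg}`")
```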

To effectively train and fine-tune the model, a powerful computational setup is recommended, including a strong GPU, sufficient RAM, and SSD storage. The training process involves several steps, from dataset creation to image generation, and can be managed through the Bria API or platform. For software, a modern OS and a Python environment with the necessary libraries are essential.

What kind of support and documentation is available for the model?
How scalable is the model?

What measures are in place to ensure the security and privacy of the data used for training and inference?

The Bria platform ensures the security and privacy of data used for training and inference through several mechanisms, as outlined in the API documentation. Here are the key measures in place:

  1. Data Access Control
  • API Token: Every request to the Bria API requires an API token, which is associated with the organization. This ensures that only authenticated and authorized users can access the API endpoints​​.
  • Endpoint Security: All endpoints for creating, managing, and retrieving datasets, models, and images require this API token, preventing unauthorized access to sensitive data.
  2. Data Transmission
  • HTTPS: Data is transmitted over HTTPS, ensuring that all data sent between the client and the Bria servers is encrypted and secure from interception.
  3. Data Storage and Handling
  • Data Isolation: Each organization’s data is isolated to prevent cross-organization access, ensuring that datasets and models are only accessible to the organization that owns them​​.
  • Image and Dataset Management: When uploading images and creating datasets, the API allows for specific image management, including the ability to delete images and datasets securely, ensuring data is not retained longer than necessary​​.
  4. Operational Security
  • Model Management: The API provides endpoints to delete models and datasets, ensuring that data related to model training can be purged securely when no longer needed​​.
  • Training Management: The training process can be started and stopped securely using endpoints that require API token authentication, preventing unauthorized training operations​​.
  5. Error Handling and Logging
  • Robust Error Handling: The API responses include comprehensive status codes for handling errors, ensuring that any issues during data operations are logged and managed appropriately​​.
  • Internal Server Security: Internal server errors and other operational conflicts are logged and handled to ensure system integrity and security during data processing.

Bria’s API incorporates robust security measures to ensure the privacy and security of data used for training and inference. This includes stringent access control via API tokens, secure data transmission over HTTPS, isolated data storage, and comprehensive error handling and logging practices. These measures collectively help in maintaining the confidentiality and integrity of the data throughout its lifecycle.

How frequently is the model updated?

The frequency of updates to the Bria model and the process for incorporating updates and bug fixes are critical aspects to understand for maintaining the performance and reliability of the integration. Here’s a detailed overview based on the available documentation:

Update Frequency

Updates typically occur in the following contexts:

  1. Model Improvements: Regular updates to enhance the model’s performance, accuracy, and capabilities.
  2. Feature Additions: Introduction of new features and functionalities to keep the model aligned with current technological advancements.
  3. Bug Fixes: Timely updates to address any identified bugs or issues to ensure smooth operation.

Process for Updates and Bug Fixes

  1. Notification and Release Notes:
    • Users are usually notified about new updates and bug fixes through release notes, which detail the changes, improvements, and any deprecated features.
    • Release notes are typically available on the official website, within the API documentation, or through user communication channels like email.
  2. Backward Compatibility:
    • Updates are designed to be backward compatible, ensuring that existing integrations continue to function correctly without requiring immediate changes.
    • In case of major changes, detailed migration guides are provided to help users transition smoothly.
  3. Versioning:
    • The API follows a versioning system (e.g., v1, v2) to manage updates and ensure stability. Users can specify the version of the API they are integrating with to avoid disruptions.
  4. Incorporating Updates:
    • Automatic Updates: For some aspects, updates might be applied automatically, especially for underlying model improvements that do not impact the API interface.
    • Manual Updates: For changes requiring user action, such as endpoint modifications or new features, users must update their integration code accordingly. Detailed instructions are provided in the release notes.
  5. Testing and Validation:
    • It’s recommended to test updates in a staging environment before deploying them to production. This ensures that any potential issues can be identified and addressed without affecting the live system.
  6. Support and Feedback:
    • Bria provides support channels to assist with updates and address any issues users might encounter during the process.
    • User feedback is often used to inform future updates and improvements, ensuring the model evolves in line with user needs.

Example of Update Process

  1. Receive Update Notification:
    • Review the release notes sent via email or available on the API documentation page.
  2. Review Changes:
    • Identify the new features, bug fixes, and any breaking changes that might affect your integration.
  3. Update Integration:
    • If there are changes to the API endpoints or new parameters, update your integration code accordingly.
    • Test the changes in a staging environment to ensure compatibility and stability.
  4. Deploy to Production:
    • Once testing is complete, deploy the updated integration to the production environment.
  5. Monitor and Validate:
    • Monitor the system for any unexpected behavior and validate that the update has been successfully incorporated.

Bria’s models are regularly updated to improve performance, introduce new features, and fix bugs. The process for incorporating these updates involves clear communication through release notes, a versioning system to manage changes, and support for backward compatibility. Users should stay informed about updates, test changes in staging environments, and apply updates to their integrations as necessary to maintain optimal performance and reliability.

Are there tools provided to evaluate and monitor the performance of the fine-tuned model?

Tools and Frameworks for Evaluating and Monitoring the Performance of Fine-Tuned Models

  1. API Endpoints for Monitoring

The Bria API provides several endpoints that facilitate the evaluation and monitoring of models during and after the training process:

  1. Training Status Monitoring:
    • Start Training: Initiate the training process using the /models/{id}/start_training endpoint.
    • Stop Training: Halt an ongoing training process using the /models/{id}/stop_training endpoint.
    • Monitor Training: Check the training status to monitor progress and identify any issues during the training process. This can be done through regular API calls to retrieve the status of the model training.
  2. Model Management:
    • Retrieve detailed information about specific models using the /models/{id} endpoint.
    • List all models associated with the client organization using the /models endpoint.
  2. Quality Predictions

Bria includes an "Oracle" capability for quality predictions, which provides insights into the expected quality of various operations. This feature can be used to predict the quality of results for actions like creating images, removing objects, and modifying backgrounds.

  • Endpoints for Quality Predictions:
    • /create for creating new visuals.
    • /objects/remove for object removal.
    • /background/remove for background operations.

These predictions help in making informed decisions and optimizing the workflow by minimizing errors and enhancing output quality.

  3. Error Handling and Logging

Comprehensive error handling and status codes are provided for all API responses, allowing users to identify and troubleshoot issues effectively (a minimal handling sketch follows the list below). Common status codes include:

  • 200 for successful operations.
  • 400 for bad requests.
  • 404 for not found errors.
  • 500 for internal server errors.
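Combining the quality-prediction routes listed earlier with these status codes, a minimal handling sketch might look like this; the base URL, header name, and upload field are assumptions to be confirmed against the API reference.

```python
# Hedged sketch: call the background-removal route and branch on the documented status codes.
import requests

BASE = "https://engine.prod.bria-api.com/v1"   # assumed base URL
HEADERS = {"api_token": "YOUR_API_TOKEN"}

with open("product_photo.jpg", "rb") as f:
    resp = requests.post(f"{BASE}/background/remove", headers=HEADERS, files={"file": f})

if resp.status_code == 200:
    print("Success:", resp.json())
elif resp.status_code == 400:
    print("Bad request - check the uploaded file and parameters.")
elif resp.status_code == 404:
    print("Route or resource not found.")
elif resp.status_code == 500:
    print("Internal server error - retry later or contact support.")
else:
    print("Unexpected status:", resp.status_code)
```
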
  4. Advanced Customization and Access through Hugging Face

For developers seeking deeper customization and advanced control over the image generation process, Bria’s models are accessible through Hugging Face. This allows for:

  • Integration of additional features such as Bria 2.3 Fast LoRA and ControlNets for specific image refinements.
  • Access to the underlying model source code for bespoke modifications and enhancements.
Is there an active community or ecosystem around the model that can provide additional resources, plugins, or extensions?

Disclaimers and Limitations:

  • Please note that entering the plan is subject to a qualification process.
  • 3-month minimum commitment.
  • The attribution engine must be activated.
  • You have the right to modify, fine-tune, and host the Bria models with any cloud vendor or host on-prem for commercial use, subject to the following requirements: You cannot share, sell, rent, lease, sub-license, distribute, or lend any Bria model or product to others, in whole or in part, or host any BRIA model or products for use by others. Hosting these models for external use is only allowed if integrated into your products or services.
  • See Bria AI Terms & Conditions for more details.
 

 

About us

BRIA AI is a specialized visual generative AI platform designed to give enterprises powerful tools and access to core technologies needed to create personalized, authentic visual content at scale. This fully customizable and flexible platform seamlessly integrates into your existing tech stack and creative workflows, making it an ideal solution for brands seeking to maintain their unique brand voice across all forms of digital content.

At BRIA AI, we are committed to ethical AI practices. Our AI is trained on 100% licensed content, and we compensate artists. We provide comprehensive legal protection so you can produce commercial content at scale without copyright or trademark worries.


Have questions?
We're here to help - Contact us today!


Program Pricing

The price varies based on your startup stage, according to the information you provide in the submission form.
There are two price options available:
 

Early Stage (Pre-Seed, Seed)

  • Up to $10M funding
  • 1-20 employees
  • US$700 monthly platform fee

Growth Stage (Series A, B)

  • More than $10M funding
  • 20-50 employees
  • US$1,800 monthly platform fee

General Terms

  • A minimum commitment of 3 months is required
  • 50K model executions/API calls per month are included
  • Additional consumption will be billed according to scale:
    • 20,001-150,000 monthly calls - US$0.03 per call
    • 150,001-300,000 monthly calls - US$0.01 per call
    • More than 300,000 monthly calls - you are welcome to contact us
      We offer flexible pricing plans, including unlimited consumption per use case - please contact us for more details.

Eligibility Criteria

To qualify for our Visual Gen-AI Startup Program, your startup must meet the following criteria:

  • Early-stage or Early growth startups
  • Up to 50 employees
  • Developing a product leveraging visual generative AI for commercial use.

Timeline

The BRIA AI Startup Program runs for one year, offering ongoing support and resources to boost your startup's growth in visual generative AI.

Full liability coverage based on the Attribution Model

As part of our responsible and legal approach, we have partnered with data contributors and creators, and all our models are trained on legally licensed data. To nurture a sustainable economy, we built an attribution model that rewards data contributors based on their visual impact on every visual generation. For more information, please visit our website, Bria.ai.

We pay data owners whose content is used to train the system. Every time an image is generated, the relevant data owners receive a reward based on their impact on that generation, much like Spotify paying artists when someone plays a song.

What do you need to do to get full liability coverage?

To receive our full liability coverage and gain the rights for commercial use when using Bria foundation models, kindly adhere to the following guidelines:

  • Install the “Attribution Agent”, available on GitHub and in the BRIA Platform. This agent creates the necessary log file to facilitate attribution to the exact data contributors.
  • This log file contains the data we use to reward the data owners. It is a non-reversible vector, meaning we cannot use it to recover the generated image or any information other than the data required for the copyright process. Your data and privacy remain safe.

Tailored Generation (Fine-Tuning)

  • Using the licensed models and auxiliary models such as ControlNets, you can fine-tune the foundation models independently for any commercial goal. Any execution of the fine-tuned model requires the same Attribution Agent discussed above.
  • Remember that Bria Generative AI experts and support can help you in this process.
  • On top of that, Bria Tailored Generation is available to purchase on the platform.
 

Restrictions

  • Do not share, sell, rent, lease, sublicense, distribute or lend any Products to others, in whole or in part, or host Products for use by others. Hosting these models for external use is only allowed if integrated into your products and services.

  • The Attribution agent must be executed to ensure liability. Do not work around, disable, circumvent, or tamper with the Attribution Agent.

General

  • Applying to the program does not guarantee acceptance.

  • Acceptance into the program is contingent upon a formal application process and the signing of an agreement document (T&C, Model license).

  • The company reserves the right to modify the terms and scope of the program.

  • The company reserves the right to amend the terms and conditions of the startup program at any time without prior notice. This includes the right to modify, add, or remove program features, adjust eligibility criteria, alter any pricing or fees, and change the duration and scope of the program.

  • The company also holds the right to cancel the program or deny participation to any applicant at its sole discretion.

  • These measures are taken to ensure the program's adaptability and effectiveness in meeting the evolving needs of both the company and its participants. All participants will be subject to the most current version of these terms and conditions.