Blackbird.io Google Vertex AI

Blackbird is the new automation backbone for the language technology industry. Blackbird provides enterprise-scale automation and orchestration with a simple no-code/low-code platform. Blackbird enables ambitious organizations to identify, vet and automate as many processes as possible. Not just localization workflows, but any business and IT process. This repository represents an application that is deployable on Blackbird and usable inside the workflow editor.

Introduction

Vertex AI is a comprehensive platform offering access to powerful multimodal models like Gemini from Google, enabling developers to seamlessly combine various inputs such as text, images, video, or code. With a diverse selection of models, Vertex AI facilitates easy customization and integration, allowing for the development and deployment of AI applications. The platform provides generative AI models, fully managed tools, and purpose-built MLOps solutions to streamline the entire machine learning lifecycle—from training and tuning to deployment and monitoring.

Before setting up

Before you can connect you need to make sure that you have a Google Cloud Platform project with access to Vertex AI and a service account with a JSON key for that project, as described below.

Creating a service account and generating JSON keys

  1. Navigate to the selected or created Cloud Platform project.
  2. Go to the IAM & Admin section.
  3. On the left sidebar, select Service accounts.
  4. Click Create service account.
  5. Enter a service account name and, optionally, a description. Click Create and continue. Select the Vertex AI Administrator role for the service account and click Continue. Finally, click Done.
  6. From the service accounts list, select the newly created service account and navigate to the Keys section.
  7. Click Add key => Create new key. Choose the JSON key type and click Create.
  8. Open the downloaded JSON file and copy its contents, which will be used in the Service account configuration string connection parameter.
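
Optionally, if you want to sanity-check the downloaded key before pasting it into Blackbird, a minimal Python sketch like the one below (using the google-auth library; the file name is a placeholder) can confirm that the JSON key authenticates:

```python
# Minimal sketch: verify a downloaded service account JSON key outside of Blackbird.
# Assumes the google-auth package is installed; the key file name is a placeholder.
import json

from google.oauth2 import service_account
from google.auth.transport.requests import Request

KEY_PATH = "my-service-account-key.json"  # placeholder path to the downloaded key

with open(KEY_PATH) as f:
    key_info = json.load(f)  # this JSON text is what goes into the connection parameter

credentials = service_account.Credentials.from_service_account_info(
    key_info,
    scopes=["https://www.googleapis.com/auth/cloud-platform"],
)
credentials.refresh(Request())  # exchanges the key for an access token; fails if the key is invalid
print("Key works for project:", key_info["project_id"])
```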

Connecting

  1. Navigate to apps and search for Google Vertex AI. If you cannot find Google Vertex AI then click Add App in the top right corner, select Google Vertex AI and add the app to your Blackbird environment.
  2. Click Add Connection.
  3. Name your connection for future reference, e.g. 'My organization'.
  4. Fill in the JSON configuration string obtained in the previous step.
  5. Click Connect.
  6. Confirm that the connection has appeared and the status is Connected.

Note: Currently the app works with models stored in the location us-west1. If you have other requirements, please let us know!

Actions

  • Generate text with Gemini generates text using a Gemini model. If generation is based on a single text prompt, the gemini-1.0-pro model is used. Optionally, you can attach an image or a video, in which case generation is performed with the gemini-1.0-pro-vision model. Both images and videos are limited to 20 MB; if an image is provided, a video cannot be, and vice versa. Supported image formats are PNG and JPEG; supported video formats are MOV, MPEG, MP4, MPG, AVI, WMV, MPEGPS and FLV. Optionally, set Is Blackbird prompt to True to indicate that the prompt given to the action is the result of one of the AI Utilities app's actions. You can also specify safety categories in the Safety categories input parameter and their respective thresholds in the Thresholds for safety categories input parameter; if one list has more items than the other, the extra items are ignored. A sketch of the underlying model call follows this list.

  • Get Quality Scores for XLIFF file gets segment-level and file-level quality scores for XLIFF files. Optionally, you can add the Threshold, New Target State and Condition input parameters to the Blackbird action to change the target state of segments that meet the desired criteria (all three must be filled); a sketch of this update logic follows the XLIFF version note below.

    Optional inputs:

    • Prompt: Add your criteria for scoring each source-target pair. If none are provided, this is replaced by "accuracy, fluency, consistency, style, grammar and spelling".
    • Bucket size: Number of translation units to process in the same request (see the dedicated section below).
    • Source and Target languages: By default, we get these values from the XLIFF header. You can provide different values; no specific format is required.
    • Threshold: A value between 0 and 10.
    • Condition: Criteria used to filter the segments whose target state will be modified.
    • New Target State: The value the target state is set to for the filtered translation units.

    Output:

    • Average Score: The aggregate of all segment-level scores.
    • Updated XLIFF file: Segment-level scores are added to the extradata attribute, and target states are updated when instructed.
  • Post-edit XLIFF file updates the targets of an XLIFF file.

    Optional inputs:

    • Prompt: Add your linguistic criteria for postediting targets.
    • Bucket size: Number of translation units to process in the same request (see the dedicated section below).
    • Source and Target languages: By default, we get these values from the XLIFF header. You can provide different values; no specific format is required.
    • Glossary
  • Process XLIFF file processes each translation unit of a given XLIFF file according to the provided instructions (the default is to translate the source tags) and updates the target text of each unit.
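
For reference, the sketch below approximates what the Generate text with Gemini action does under the hood, using the Vertex AI Python SDK directly. It is a simplified assumption of the app's behavior rather than its actual code; the project ID, file name and prompt are placeholders.

```python
# Rough sketch of a Gemini call with an optional image and paired safety settings,
# using the Vertex AI Python SDK. This approximates the Blackbird action's behavior;
# it is not the app's actual implementation.
import vertexai
from vertexai.generative_models import GenerativeModel, HarmBlockThreshold, HarmCategory, Part

vertexai.init(project="my-gcp-project", location="us-west1")  # placeholder project ID

prompt = "Describe the product shown in the image in two sentences."
image = Part.from_data(open("product.png", "rb").read(), mime_type="image/png")  # <= 20 MB

# With an image (or video) attached the vision model is used; a text-only prompt
# would go to gemini-1.0-pro instead.
model = GenerativeModel("gemini-1.0-pro-vision")

# Safety categories are paired one-to-one with thresholds, mirroring the two
# list inputs of the Blackbird action.
safety_settings = {
    HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
}

response = model.generate_content([image, prompt], safety_settings=safety_settings)
print(response.text)
```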

Note that all XLIFF actions support versions 1.2 and 2.1 of the XLIFF format, since these are the most commonly used in the industry. If you have a different version, please let us know and we will consider adding support for it.
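
To illustrate the Threshold, Condition and New Target State logic of the quality-score action, here is a minimal sketch that walks an XLIFF 1.2 file and changes the state of targets whose segment score falls below a threshold. It assumes the score is stored in the trans-unit's extradata attribute, as described in the action's output; the real app's attribute layout may differ.

```python
# Minimal sketch of the Threshold / Condition / New Target State logic for an
# XLIFF 1.2 file. Assumes the per-segment score sits in the trans-unit's
# "extradata" attribute; the real app's output format may differ.
import xml.etree.ElementTree as ET

NS = {"x": "urn:oasis:names:tc:xliff:document:1.2"}
ET.register_namespace("", NS["x"])

THRESHOLD = 6.0                               # value between 0 and 10
NEW_TARGET_STATE = "needs-review-translation"

tree = ET.parse("scored.xliff")               # placeholder file name
for unit in tree.getroot().iterfind(".//x:trans-unit", NS):
    score = float(unit.get("extradata", "10"))
    target = unit.find("x:target", NS)
    if target is not None and score < THRESHOLD:   # Condition: score below threshold
        target.set("state", NEW_TARGET_STATE)

tree.write("updated.xliff", xml_declaration=True, encoding="utf-8")
```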

Bucket size, performance and cost

XLIFF files can contain a lot of segments. Each action takes your segments and sends them to the AI model for processing. It's possible that the number of segments is so high that the prompt exceeds the model's context window, or that the model takes longer than Blackbird actions are allowed to take. This is why we have introduced the Bucket size parameter. You can tweak it to determine how many segments are sent to the AI model at once, splitting the workload across multiple API calls. The trade-off is that the same context prompt needs to be sent along with each request, which increases the number of tokens used. From experiments we have found that a bucket size of 1500 is sufficient for models like gpt-4o; that's why 1500 is the default bucket size, although other models may require different bucket sizes. A sketch of this chunking is shown below.
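
As a rough illustration of the trade-off, the sketch below splits translation units into buckets and builds one request per bucket, repeating the same context prompt each time; all names are illustrative, not the app's actual code.

```python
# Illustrative sketch of bucketing: split translation units into chunks of
# `bucket_size` and build one request per chunk. Names are illustrative;
# this is not the app's actual code.
from typing import Iterator


def buckets(units: list[str], bucket_size: int = 1500) -> Iterator[list[str]]:
    """Yield successive chunks of at most `bucket_size` translation units."""
    for start in range(0, len(units), bucket_size):
        yield units[start:start + bucket_size]


def build_requests(units: list[str], context_prompt: str, bucket_size: int = 1500) -> list[str]:
    """Build one prompt per bucket. The context prompt is repeated for every
    bucket, which is the extra token cost mentioned above."""
    return [
        context_prompt + "\n\n" + "\n".join(chunk)
        for chunk in buckets(units, bucket_size)
    ]
```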

Feedback

Do you want to use this app or do you have feedback on our implementation? Reach out to us using the established channels or create an issue.
