The fp_associativity_analyzer package helps developers and researchers analyze text descriptions of floating-point arithmetic or SIMD programming challenges and identify potential associativity issues. Using a language model, it produces a structured analysis that extracts key points and insights with consistent formatting and reliable output. This helps in understanding scenarios where floating-point non-associativity can lead to unexpected results, especially in high-performance computing.
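For context, a minimal demonstration of the underlying hazard in IEEE-754 double precision (plain Python, independent of this package):

```python
# Floating-point addition is not associative: the grouping of operands
# changes which intermediate results get rounded.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c    # 0.6000000000000001
right = a + (b + c)   # 0.6

print(left == right)  # False
```

The same effect is why vectorized or parallel reductions, which regroup the terms of a sum, can produce results that differ from a sequential loop.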
```shell
pip install fp_associativity_analyzer
```

Here's an example of how to use the package in Python:

```python
from fp_associativity_analyzer import fp_associativity_analyzer

response = fp_associativity_analyzer(
    user_input="Explain the potential associativity issues in SIMD loop optimizations.",
    api_key="your_api_key_here"  # Optional if you want to provide your own API key
)
print(response)
```

Parameters:

- user_input (str): The text describing the floating-point or SIMD challenge to analyze.
- llm (Optional[BaseChatModel]): An instance of a language model. Defaults to the internal ChatLLM7.
- api_key (Optional[str]): Your API key for LLM7. If not provided, it is read from the environment variable LLM7_API_KEY.
This package uses ChatLLM7 from langchain_llm7 by default. You can pass your own language model instance instead, such as:
- OpenAI GPT models
- Anthropic Claude
- Google PaLM
Using OpenAI:

```python
from langchain_openai import ChatOpenAI
from fp_associativity_analyzer import fp_associativity_analyzer

llm = ChatOpenAI()  # reads OPENAI_API_KEY from the environment
response = fp_associativity_analyzer(
    user_input="Describe non-associative floating-point operations.",
    llm=llm
)
print(response)
```

Using Anthropic:
```python
from langchain_anthropic import ChatAnthropic
from fp_associativity_analyzer import fp_associativity_analyzer

# ChatAnthropic requires a model name; this one is an example.
llm = ChatAnthropic(model="claude-3-5-sonnet-20240620")
response = fp_associativity_analyzer(
    user_input="How does floating-point non-associativity affect parallel computations?",
    llm=llm
)
print(response)
```

Using Google PaLM:
```python
from langchain_google_genai import ChatGoogleGenerativeAI
from fp_associativity_analyzer import fp_associativity_analyzer

# ChatGoogleGenerativeAI requires a model name; this one is an example.
llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")
response = fp_associativity_analyzer(
    user_input="What are the implications of floating-point non-associativity?",
    llm=llm
)
print(response)
```

By default, this package uses LLM7's free tier, which provides sufficient rate limits for most use cases. For higher rate limits, you can obtain a free API key from the LLM7 Token Service and supply it via the environment variable LLM7_API_KEY or directly in the function call.
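For example, assuming the LLM7_API_KEY variable is read as documented above (the script name below is a placeholder for your own code):

```shell
# Export the key once per shell session; the analyzer picks it up
# automatically when no api_key argument is passed.
export LLM7_API_KEY="your_api_key_here"   # placeholder value
python run_analysis.py                    # hypothetical script using the package
```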
For issues, bug reports, or feature requests, please visit the GitHub repository:
https://github.com/chigwell/fp-associativity-analyzer
Eugene Evstafev
Email: [email protected]
GitHub: @chigwell