RedStack
https://redstack.dev
Microservices, Cloud Native and AI

Using Reflection to Help LLMs Write Better SQL
https://redstack.dev/2025/11/12/using-reflection-to-help-llms-write-better-sql/
By Mark Nelson, 12 Nov 2025
Getting LLMs to write good SQL can be tricky. Sure, they can generate syntactically correct queries, but do those queries actually answer the question you asked? Sometimes an LLM might give you technically valid SQL that doesn’t quite capture what you’re really looking for.
I wanted to experiment with the reflection pattern to see if we could get better results. The idea is simple: after the LLM generates SQL and executes it, have it reflect on whether the query actually answers the original question. If not, let it try again with the benefit of seeing both the question and the initial results.
Let me show you how this works.
Setting up the database
I used an Oracle Autonomous Database on Oracle Cloud for this experiment. First, I created a user with the necessary permissions. Connect as ADMIN and run this:
create user moviestream identified by <password>;
grant connect, resource, unlimited tablespace to moviestream;
grant execute on dbms_cloud to moviestream;
grant execute on dbms_cloud_repo to moviestream;
grant create table to moviestream;
grant create view to moviestream;
grant all on directory data_pump_dir to moviestream;
grant create procedure to moviestream;
grant create sequence to moviestream;
grant create job to moviestream;
Next, let’s load the sample dataset. Still as ADMIN, run this:
Then connect as the moviestream user and run this to load the rest of the dataset:
BEGIN
workshop.add_dataset(tag => 'end-to-end');
END;
/
This takes a few minutes to complete, after which we have a database with customer and sales data to work with.
The approach
The reflection pattern works like this:
Give the LLM the database schema and a natural language question
LLM generates SQL (v1)
Execute the SQL and get results
LLM reflects: “Does this SQL actually answer the question?”
Generate improved SQL (v2) based on the reflection
Execute v2 and provide the final answer
The key insight here is that by seeing the actual results, the LLM can judge whether it interpreted the question correctly. For example, if you ask “who are our top customers?”, the LLM might initially think “highest income” when you actually meant “highest spending”. Seeing the results helps it course-correct.
Setting up the Python environment
I used a Jupyter notebook for this experiment. First, let’s install the libraries we need:
%pip install aisuite oracledb
I’m using Andrew Ng’s aisuite for a unified interface to different LLM providers, and oracledb to connect to the database.
Now let’s import aisuite:
import aisuite as ai
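We also need a client object to call the models through. Here is a minimal sketch – aisuite picks up provider credentials (for example an OPENAI_API_KEY environment variable), which I am assuming are already set:

# create the aisuite client; provider API keys are read from environment variables
client = ai.Client()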
Connecting to Oracle Autonomous Database
For Oracle Autonomous Database, you’ll need to download the wallet and set up the connection. Here’s how I connected:
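The details will vary with your tenancy, so treat this as a sketch: it assumes the wallet zip has been unzipped into a local directory called wallet and that the service name is myadb_low – substitute your own DSN, password and paths:

import oracledb

# connect in thin mode using the downloaded wallet
connection = oracledb.connect(
    user="moviestream",
    password="<password>",
    dsn="myadb_low",                 # a service name from tnsnames.ora in the wallet
    config_dir="./wallet",           # directory containing tnsnames.ora and sqlnet.ora
    wallet_location="./wallet",
    wallet_password="<wallet password>",
)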
For the LLM to write good SQL, it needs to know what tables and columns are available. Let’s write a function to introspect the schema:
def get_schema():
    stmt = '''
        SELECT
            utc.table_name,
            utc.column_name,
            utc.data_type,
            utc.data_length,
            utc.nullable,
            utc.column_id,
            ucc.comments AS column_comment,
            utab.comments AS table_comment
        FROM
            user_tab_columns utc
        LEFT JOIN
            user_col_comments ucc
            ON utc.table_name = ucc.table_name
            AND utc.column_name = ucc.column_name
        LEFT JOIN
            user_tab_comments utab
            ON utc.table_name = utab.table_name
        ORDER BY
            utc.table_name,
            utc.column_id
    '''
    cursor = connection.cursor()
    cursor.execute(stmt)
    rows = cursor.fetchall()
    # Convert to one long string
    result_string = '\n'.join([str(row) for row in rows])
    cursor.close()
    return result_string
This function queries the Oracle data dictionary to get information about all tables and columns, including any comments. It returns everything as a single string that we can pass to the LLM.
Generating SQL from natural language
Now let’s write the function that takes a natural language question and generates SQL:
def generate_sql(question: str, schema: str, model: str):
prompt = f'''
You are an SQL assistant for Oracle Database.
You create Oracle SQL statements to help answer user questions.
Given the user's question and the schema information, write an SQL
query to answer the question.
Schema:
{schema}
User question:
{question}
Respond with the SQL only. Do not add any extra characters or delimiters.
'''
response = client.chat.completions.create(
model=model,
messages=[{"role": "user", "content": prompt}],
temperature=0,
)
return response.choices[0].message.content.strip()
This function takes the question, the schema information, and the model to use. It constructs a prompt that gives the LLM the context it needs and asks for just the SQL query.
Executing SQL queries
We need a function to actually run the generated SQL:
def execute_sql(stmt):
cursor = connection.cursor()
cursor.execute(stmt)
rows = cursor.fetchall()
# Convert to one long string
result_string = '\n'.join([str(row) for row in rows])
cursor.close()
return result_string
This executes the query and returns the results as a string.
The reflection step
Here’s where it gets interesting – the function that reviews the SQL and results, and potentially generates improved SQL:
import json
def refine_sql(question, sql_query, output, schema, model):
prompt = f'''
You are a SQL reviewer and refiner.
User asked:
{question}
Original SQL:
{sql_query}
SQL Output:
{output}
Schema:
{schema}
Step 1: Evaluate if the SQL OUTPUT fully answers the user's question.
Step 2: If improvement is needed, provide a refined SQL query for Oracle.
If the original SQL is already correct, return it unchanged.
Return a strict JSON object with two fields:
- "feedback": brief evaluation and suggestions
- "refined_sql": the final SQL to run
Return ONLY the actual JSON document.
Do NOT add any extra characters or delimiters outside of the actual JSON itself.
In particular do NOT include backticks before and after the JSON document.
'''
response = client.chat.completions.create(
model=model,
messages=[{"role": "user", "content": prompt}],
temperature=0,
)
content = response.choices[0].message.content
try:
obj = json.loads(content)
feedback = str(obj.get("feedback", "")).strip()
refined_sql = str(obj.get("refined_sql", sql_query)).strip()
if not refined_sql:
refined_sql = sql_query
except Exception:
# Fallback if model doesn't return valid JSON
feedback = content.strip()
refined_sql = sql_query
return feedback, refined_sql
This is the heart of the reflection pattern. The function:
Shows the LLM the original question, the generated SQL, and the actual results
Asks it to evaluate whether the SQL output really answers the question
If not, asks for an improved query
Returns both the feedback and the refined SQL as JSON
The JSON format makes it easy to parse the response and extract both pieces of information. I had to be fairly pedantic to get gpt-4o to give me just JSON!
Providing a final answer
Finally, let’s write a function to convert the query results into a natural language answer:
def provide_final_answer(question, output, model):
prompt = f'''
You are a helpful assistant.
Given a user's question, and the results of a database query
which has been created, evaluated, improved and executed already
in order to get the provided output, you should provide an
answer to the user's question.
User question:
{question}
Query results:
{output}
'''
response = client.chat.completions.create(
model=model,
messages=[{"role": "user", "content": prompt}],
temperature=0,
)
return response.choices[0].message.content.strip()
This takes the final query results and turns them into a friendly, natural language response for the user.
Putting it all together
Now let’s create the main function that orchestrates the entire process:
The original SQL query retrieves the top 10 customers based on income, which may not
fully answer the question of 'top customers' as it could be interpreted in terms of
sales or transactions. To better answer the question, we should consider the total
sales or transactions made by each customer.
Great! The LLM recognized that “top customers” probably means customers who spend the most, not customers who earn the most.
Second attempt (SQL V2):
SELECT C.CUST_ID, C.FIRST_NAME, C.LAST_NAME, C.EMAIL, SUM(S.ACTUAL_PRICE) AS TOTAL_SALES
FROM CUSTOMER C
JOIN CUSTSALES S ON C.CUST_ID = S.CUST_ID
GROUP BY C.CUST_ID, C.FIRST_NAME, C.LAST_NAME, C.EMAIL
ORDER BY TOTAL_SALES DESC
FETCH FIRST 10 ROWS ONLY;
Much better! Now it’s joining with the sales data and calculating total spending per customer.
The top customer is Tsubasa Nakajima with $2,356.05 in total sales, followed by Steffi Bielvenstram with $2,334.73, and so on. These are very different customers from the high-income list we got in the first attempt!
Natural language answer:
Our top customers, based on the provided data, are:
1. Tsubasa Nakajima - Email: [email protected], Total: $2356.05
2. Steffi Bielvenstram - Email: [email protected], Total: $2334.73
3. Guadalupe Zamora - Email: [email protected], Total: $2329.76
...
These customers have the highest total amounts associated with them.
What I learned
This reflection approach really does help. The LLM is pretty good at recognizing when its initial SQL doesn’t quite match the intent of the question – especially when it can see the actual results.
The pattern of generate → execute → reflect → regenerate is more expensive (two LLM calls instead of one for generation, plus one more for the final answer), but the quality improvement is noticeable. For production use, you might want to:
Cache schema information instead of fetching it every time
Add more sophisticated error handling for SQL errors
Consider running both queries in parallel and comparing results
Track which types of questions benefit most from reflection
Use the reflection feedback to build a dataset for fine-tuning
The approach is straightforward to implement and the results speak for themselves – the reflection step caught a subtle but important misinterpretation that would have given technically correct but unhelpful results.
Give it a try with your own database and questions – I think you’ll find the reflection step catches a lot of these subtle misinterpretations that would otherwise lead to valid but wrong answers.
What next? I am going to experiment with some more complex questions, and then compare the performance of a number of different LLMs to see how they go with and without reflection. Stay tuned!
Using Multiple Datasources with Spring Boot and Spring Data JPA
https://redstack.dev/2025/10/30/using-multiple-datasources-with-spring-boot-and-spring-data-jpa/
By Mark Nelson, 30 Oct 2025
Hi everyone! Today I want to show you how to configure multiple datasources in a Spring Boot application using Spring Data JPA and the Oracle Spring Boot Starter for Universal Connection Pool (UCP).
This is a pattern you’ll need when you have a single application that needs to connect to multiple databases. Maybe you have different domains in separate databases, or you’re working with legacy systems, or you need to separate read and write operations across different database instances. Whatever the reason, Spring Boot makes this pretty straightforward once you understand the configuration pattern.
For this example, we’re going to build a simple application that manages two separate domains:
Customers – stored in one database
Products – stored in a different database
Each domain will have its own datasource, entity manager, and transaction manager. We’ll use Spring Data JPA repositories to interact with each database, and we’ll show how to use both datasources in a REST controller.
I am assuming you have a database with two users called customer and product and some tables. Here’s the SQL to set that up:
$ sqlplus sys/Welcome12345@localhost:1521/FREEPDB1 as sysdba
alter session set container=freepdb1;
create user customer identified by Welcome12345;
create user product identified by Welcome12345;
grant connect, resource, unlimited tablespace to customer;
grant connect, resource, unlimited tablespace to product;
commit;
$ sqlplus customer/Welcome12345@localhost:1521/FREEPDB1
create table customer (id number, name varchar2(64));
insert into customer (id, name) values (1, 'mark');
commit;
$ sqlplus product/Welcome12345@localhost:1521/FREEPDB1
create table product (id number, name varchar2(64));
insert into product (id, name) values (1, 'coffee machine');
commit;
Step 1: Dependencies
Let’s start with the Maven dependencies. Here’s what you’ll need in your pom.xml:
<dependencies>
<!-- Spring Boot Starter Web for REST endpoints -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<!-- Spring Boot Starter Data JPA -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<!-- Oracle Spring Boot Starter for UCP -->
<dependency>
<groupId>com.oracle.database.spring</groupId>
<artifactId>oracle-spring-boot-starter-ucp</artifactId>
<version>25.3.0</version>
</dependency>
</dependencies>
The key dependency here is the oracle-spring-boot-starter-ucp, which provides autoconfiguration for Oracle’s Universal Connection Pool. UCP is Oracle’s high-performance connection pool implementation that provides features like connection affinity, Fast Connection Failover, and Runtime Connection Load Balancing.
Step 2: Configure the Datasources in application.yaml
Now let’s configure our two datasources in the application.yaml file. We’ll define connection properties for both the customer and product databases:
Notice that we’re using custom property prefixes (spring.datasource.customer and product) instead of the default spring.datasource. This is because Spring Boot’s autoconfiguration will only create a single datasource by default. When you need multiple datasources, you need to create them manually and use custom configuration properties.
In this example, both datasources happen to point to the same database server but use different schemas (users). In a real-world scenario, these would typically point to completely different database instances.
Step 3: Configure the Customer Datasource
Now we need to create the configuration classes that will set up our datasources, entity managers, and transaction managers. Let’s start with the customer datasource.
Create a new package called customer and add a configuration class called CustomerDataSourceConfig.java:
package com.example.demo.customer;
import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.autoconfigure.jdbc.DataSourceProperties;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.orm.jpa.EntityManagerFactoryBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.data.jpa.repository.config.EnableJpaRepositories;
import org.springframework.orm.jpa.JpaTransactionManager;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.annotation.EnableTransactionManagement;
@Configuration
@EnableTransactionManagement
@EnableJpaRepositories(entityManagerFactoryRef = "customerEntityManagerFactory", transactionManagerRef = "customerTransactionManager", basePackages = {
"com.example.demo.customer" })
public class CustomerDataSourceConfig {
@Bean(name = "customerProperties")
@ConfigurationProperties("spring.datasource.customer")
public DataSourceProperties customerDataSourceProperties() {
return new DataSourceProperties();
}
/**
* Creates and configures the customer DataSource.
*
* @param properties the customer datasource properties
* @return configured DataSource instance
*/
@Primary
@Bean(name = "customerDataSource")
@ConfigurationProperties(prefix = "spring.datasource.customer")
public DataSource customerDataSource(@Qualifier("customerProperties") DataSourceProperties properties) {
return properties.initializeDataSourceBuilder().build();
}
/**
* Reads customer JPA properties from application.yaml.
*
* @return Map of JPA properties
*/
@Bean(name = "customerJpaProperties")
@ConfigurationProperties("spring.jpa.customer.properties")
public java.util.Map<String, String> customerJpaProperties() {
return new java.util.HashMap<>();
}
/**
* Creates and configures the customer EntityManagerFactory.
*
* @param builder the EntityManagerFactoryBuilder
* @param dataSource the customer datasource
* @param jpaProperties the JPA properties from application.yaml
* @return configured LocalContainerEntityManagerFactoryBean
*/
@Primary
@Bean(name = "customerEntityManagerFactory")
public LocalContainerEntityManagerFactoryBean customerEntityManagerFactory(EntityManagerFactoryBuilder builder,
@Qualifier("customerDataSource") DataSource dataSource,
@Qualifier("customerJpaProperties") java.util.Map<String, String> jpaProperties) {
return builder.dataSource(dataSource)
.packages("com.example.demo.customer")
.persistenceUnit("customers")
.properties(jpaProperties)
.build();
}
@Bean
@ConfigurationProperties("spring.jpa.customer")
public PlatformTransactionManager customerTransactionManager(
@Qualifier("customerEntityManagerFactory") LocalContainerEntityManagerFactoryBean entityManagerFactory) {
return new JpaTransactionManager(entityManagerFactory.getObject());
}
}
Let’s break down what’s happening here:
@EnableJpaRepositories – This tells Spring Data JPA where to find the repositories for this datasource. We specify the base package (com.example.demo.customer), and we reference the entity manager factory and transaction manager beans by name.
@Primary – We mark the customer datasource as the primary one. This means it will be used by default when autowiring a datasource, entity manager, or transaction manager without a @Qualifier. You must have exactly one primary datasource when using multiple datasources.
customerDataSource() – This creates the datasource bean using Spring Boot's DataSourceBuilder. The @ConfigurationProperties annotation binds the properties from our application.yaml (with the spring.datasource.customer prefix) to the datasource configuration.
customerEntityManagerFactory() – This creates the JPA entity manager factory, which is responsible for creating entity managers. We configure it to scan for entities in the customer package and set up Hibernate properties.
customerTransactionManager() – This creates the transaction manager for the customer datasource. The transaction manager handles transaction boundaries and ensures ACID properties.
Step 4: Configure the Product Datasource
Now let’s create the configuration for the product datasource. Create a new package called product and add ProductDataSourceConfig.java:
package com.example.demo.product;
import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.autoconfigure.jdbc.DataSourceProperties;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.orm.jpa.EntityManagerFactoryBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.jpa.repository.config.EnableJpaRepositories;
import org.springframework.orm.jpa.JpaTransactionManager;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.annotation.EnableTransactionManagement;
@Configuration
@EnableTransactionManagement
@EnableJpaRepositories(entityManagerFactoryRef = "productEntityManagerFactory", transactionManagerRef = "productTransactionManager", basePackages = {
"com.example.demo.product" })
public class ProductDataSourceConfig {
@Bean(name = "productProperties")
@ConfigurationProperties("spring.datasource.product")
public DataSourceProperties productDataSourceProperties() {
return new DataSourceProperties();
}
@Bean(name = "productDataSource")
@ConfigurationProperties(prefix = "spring.datasource.product")
public DataSource productDataSource(@Qualifier("productProperties") DataSourceProperties properties) {
return properties.initializeDataSourceBuilder().build();
}
/**
* Reads product JPA properties from application.yaml.
*
* @return Map of JPA properties
*/
@Bean(name = "productJpaProperties")
@ConfigurationProperties("spring.jpa.product.properties")
public java.util.Map<String, String> productJpaProperties() {
return new java.util.HashMap<>();
}
/**
* Creates and configures the product EntityManagerFactory.
*
* @param builder the EntityManagerFactoryBuilder
* @param dataSource the product datasource
* @param jpaProperties the JPA properties from application.yaml
* @return configured LocalContainerEntityManagerFactoryBean
*/
@Bean(name = "productEntityManagerFactory")
public LocalContainerEntityManagerFactoryBean productEntityManagerFactory(@Autowired EntityManagerFactoryBuilder builder,
@Qualifier("productDataSource") DataSource dataSource,
@Qualifier("productJpaProperties") java.util.Map<String, String> jpaProperties) {
return builder.dataSource(dataSource)
.packages("com.example.demo.product")
.persistenceUnit("products")
.properties(jpaProperties)
.build();
}
@Bean
@ConfigurationProperties("spring.jpa.product")
public PlatformTransactionManager productTransactionManager(
@Qualifier("productEntityManagerFactory") LocalContainerEntityManagerFactoryBean entityManagerFactory) {
return new JpaTransactionManager(entityManagerFactory.getObject());
}
}
The product configuration is almost identical to the customer configuration, with a few key differences:
No @Primary annotations – Since we already designated the customer datasource as primary, we don’t mark the product beans as primary.
Different package – The @EnableJpaRepositories points to the product package, and the entity manager factory scans the product package for entities.
Different bean names – All the beans have different names (productDataSource, productEntityManagerFactory, productTransactionManager) to avoid conflicts.
Step 5: Create the Domain Models
Now let’s create the JPA entities for each datasource. First, in the customer package, create Customer.java:
package com.example.demo.customer;
import jakarta.persistence.Entity;
import jakarta.persistence.Id;
@Entity
public class Customer {
@Id
public int id;
public String name;
public Customer() {
this.id = 0;
this.name = "";
}
}
And in the product package, create Product.java:
package com.example.demo.product;
import jakarta.persistence.Entity;
import jakarta.persistence.Id;
@Entity
public class Product {
@Id
public int id;
public String name;
public Product() {
this.id = 0;
this.name = "";
}
}
Step 6: Create the Repositories
Now let’s create Spring Data JPA repositories for each entity. In the customer package, create CustomerRepository.java:
Finally, let’s create a REST controller that demonstrates how to use both datasources. Create a controller package and add CustomerController.java:
package com.example.demo.controllers;
import java.util.List;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import com.example.demo.customer.Customer;
import com.example.demo.customer.CustomerRepository;
@RestController
public class CustomerController {
final CustomerRepository customerRepository;
public CustomerController(CustomerRepository customerRepository) {
this.customerRepository = customerRepository;
}
@GetMapping("/customers")
public List<Customer> getCustomers() {
return customerRepository.findAll();
}
}
A few important things to note about the controller:
Transaction Managers – When you have multiple datasources, you need to explicitly specify which transaction manager to use. Notice the @Transactional("customerTransactionManager") and @Transactional("productTransactionManager") annotations on the write operations. If you don’t specify a transaction manager, Spring will use the primary one (customer) by default.
Repository Autowiring – The repositories are autowired normally. Spring knows which datasource each repository uses based on the package they’re in, which we configured in our datasource configuration classes.
Cross-datasource Operations – The initializeData() method demonstrates working with both datasources in a single method. However, note that these operations are not in a distributed transaction – if one fails, the other won’t automatically roll back. If you need distributed transactions across multiple databases, you would need to use JTA (Java Transaction API).
Let’s also create ProductController.java:
package com.example.demo.controllers;
import java.util.List;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import com.example.demo.product.Product;
import com.example.demo.product.ProductRepository;
@RestController
public class ProductController {
final ProductRepository productRepository;
public ProductController(ProductRepository productRepository) {
this.productRepository = productRepository;
}
@GetMapping("/products")
public List<Product> getProducts() {
return productRepository.findAll();
}
}
Testing the Application
Now you can run your application! Make sure you have two Oracle database users created (customer and product), or adjust the configuration to point to your specific databases.
Start the application:
mvn spring-boot:run
Then you can test it with some curl commands:
# Get all customers
$ curl http://localhost:8080/customers
[{"id":1,"name":"mark"}]
# Get all products
$ curl http://localhost:8080/products
[{"id":1,"name":"coffee machine"}]
Wrapping Up
And there you have it! We’ve successfully configured a Spring Boot application with multiple datasources using Spring Data JPA and Oracle’s Universal Connection Pool. The key points to remember are:
Custom configuration properties – Use custom prefixes for each datasource in your application.yaml
Manual configuration – Create configuration classes for each datasource with beans for the datasource, entity manager factory, and transaction manager
Primary datasource – Designate one datasource as primary using @Primary
Package organization – Keep entities and repositories for each datasource in separate packages
Explicit transaction managers – Specify which transaction manager to use for write operations with @Transactional
This pattern works great when you need to connect to multiple databases, whether they’re different types of databases or different instances of the same database. Oracle’s Universal Connection Pool provides excellent performance and reliability for your database connections.
Custom vector distance functions in Oracle (using JavaScript)
https://redstack.dev/2025/10/20/custom-vector-distance-functions-and-hybrid-vector-search-in-oracle-using-javascript/
By Mark Nelson, 20 Oct 2025

In case you missed it, Oracle Database 26ai was announced last week at Oracle AI World, with a heap of new AI features and capabilities like hybrid vector search, MCP server support, acceleration with NVIDIA, and much more.
Of course, I wanted to check it out, and I was thinking about what to do first. I remembered this LinkedIn post from Anders Swanson about implementing custom vector distance functions in Oracle using the new JavaScript capabilities, and I thought that could be something interesting to do, so I am going to show you how to implement and use Jaccard distance for dense vector embeddings for similarity searches.
Now, this is a slightly contrived example, because I am more interested in showing you how to add a custom metric than in the actual metric itself. I chose Jaccard because the actual implementation is pretty compact.
Now, Oracle does already include Jaccard distance, but only for the BINARY data type, which is where Jaccard is mostly used. But there is a version that can be used for continuous/real-valued vectors as well (this version is for dense vectors), and that is what we will implement.
This is the formula for Jaccard similarity for continuous vectors. This is also known as the Tanimoto coefficient. It is the intersection divided by the union (or zero if the union is zero):
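For two vectors a and b with the same number of dimensions n, the similarity is (this matches the JavaScript implementation we will write below):

\[ J(a, b) = \frac{\sum_{i=1}^{n} \min(a_i, b_i)}{\sum_{i=1}^{n} \max(a_i, b_i)} \]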
To get the Jaccard distance, we just subtract the Jaccard similarity from one.
Before we start, let’s look at a two-dimensional example to get a feel for how it works. Of course, the real vectors created by embedding models have many more dimensions, but it is hard for us to visualize more than two or three dimensions without also introducing techniques like dimensionality reduction and projection).
Here we have two vectors A [5 8] and B [7 4]:
The union is calculated using the max values, as you see in the formula above, so in this example it is 7 + 8 = 15. The intersection is calculated with the min values, so it is 5 + 4 = 9.
So in this example, the Jaccard similarity is 9 / 15 = 0.6
And so the Jaccard distance is 1 – 0.6 = 0.4
Ok, now that we have some intuition about how this distance metric works, let’s implement it in Oracle.
Start up an Oracle Database
First, let’s fire up Oracle Database Free 26ai in a container:
This will pull the latest image, which at the time of writing is 26ai (version tag 23.26.0.0). You can check the logs to see when startup is complete – you'll see the message "DATABASE IS READY TO USE":
docker logs -f db26ai
Let’s create a user called vector with the necessary privileges:
docker exec -i db26ai sqlplus sys/Welcome12345@localhost:1521/FREEPDB1 as sysdba <<EOF
alter session set container=FREEPDB1;
create user vector identified by vector;
grant connect, resource, unlimited tablespace, create credential, create procedure, create mle, create any index to vector;
commit;
EOF
Now you can connect with your favorite client. I am going to use Oracle SQL Developer for VS Code. See the link for install instructions.
Implement the custom distance function
Open up an SQL Worksheet, or run this in your tool of choice:
create or replace function jaccard_distance("a" vector, "b" vector)
return binary_double
deterministic parallel_enable
as mle language javascript pure {{
// check the vectors are the same length
if (a.length !== b.length) {
throw new Error('Vectors must have same length');
}
let intersection = 0;
let union = 0;
for (let i = 0; i < a.length; i++) {
intersection += Math.min(a[i], b[i]);
union += Math.max(a[i], b[i]);
}
// handle the case where union is zero (all-zero vectors)
if (union === 0) {
return 0;
}
const similarity = intersection / union;
return 1 - similarity;
}};
/
Let’s walk throught this. First, you see that we are creating a function called jaccard_distance which accepts two vectors (a and b) as input and returns a binary_double. This function sugnature is required for distance functions. Next we must include the deterministric keyword and we have also included the parallel_enable keyword so that this function could be used with HNSW vector indexes. For the purposes of this example, you can just ignore those or assume that they are just needed as part of the function signature.
Next you see that we mention this will be an MLE function written in JavaScript, and we added the pure keyword to let the database know that this is a pure function – meaning it has no side effects, it will not update any data, and its output will always be the same for a given set of inputs (i.e., that it is memoizable).
Then we have the actual implementation of the function. First, we check that the vectors have the same length (i.e., the same number of dimensions) which is required for this calculation to be applicable.
Then we work through the vectors and collect the minimums and maximums to calculate the intersection and the union.
Next, we check if the union is zero, and if so we return zero to handle that special case. And finally, we calculate the similarity, then subtract it from one to get the distance and return that.
Using our custom distance function
Great, so let’s test our function. We can start by creating a table t1 to store some vectors:
create table t1 (
id number,
v vector(2, float32)
);
And let’s add a couple of vectors, including the one we saw in the example above [5 8]:
Now let's try the function with real embeddings from an embedding model. In VS Code, create a new Jupyter Notebook using File > New File…, choose Jupyter Notebook as the type of file, and save your new file as jaccard.ipynb.
First, we need to set up the Python runtime environment. Click on the Select Kernel button (it's on the top right). Select Python Environment, then Create Python Environment. Select the option to create a Venv (Virtual Environment) and choose your Python interpreter. I recommend using at least Python 3.11. This will download all the necessary files and will take a minute or two.
Now, let’s install the libraries we will need – enter this into a cell and run it:
%pip install oracledb sentence-transformers
Now, connect to the same Oracle database (again, enter this into a cell and run it):
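A minimal connection sketch, assuming the same local container and the vector user we created above:

import oracledb

# connect to the local Oracle Database Free container as the vector user
connection = oracledb.connect(
    user="vector",
    password="vector",
    dsn="localhost:1521/FREEPDB1",
)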
Let’s create a table to hold 1024 dimension vectors that we will create with the mxbai-embed-large-v1 embedding model. Back in your SQL Worksheet, run this statement:
create table t2 (
id number,
v vector(1024, float32)
);
Ok, now let’s create some embeddings. Back in your notebook, create a new cell with this code:
import oracledb
from sentence_transformers import SentenceTransformer
# Initialize the embedding model
print("Loading embedding model...")
model = SentenceTransformer('mixedbread-ai/mxbai-embed-large-v1')
# Your text data
texts = [
"The quick brown fox jumps over the lazy dog",
"Machine learning is a subset of artificial intelligence",
"Oracle Database 23ai supports vector embeddings",
"Python is a popular programming language",
"Embeddings capture semantic meaning of text"
]
# Generate embeddings
print("Generating embeddings...")
embeddings = model.encode(texts)
Let’s discuss what we are doing in this code. First, we are going to download the embedding model usign the SentenceTransformer. Then, we define a few simple texts that we can use for this example and use the embedding model to create the vector embeddings for those texts.
If you want to see what the embeddings look like, just enter “embeddings” in a cell and run it. In the output you can see the shape is 5 (rows) with 1024 dimensions and the type is float32.
Now, let’s insert the embeddings into our new table t2:
import array
cursor = connection.cursor()
# Insert data
for i in range(len(embeddings)):
cursor.execute("""
INSERT INTO t2 (id, v)
VALUES (:1, :2)
""", [i, array.array('f', embeddings[i].tolist())])
connection.commit()
print(f"Successfully inserted {len(texts)} records")
You can take a look at the vectors using a simple query (back in your SQL Worksheet):
select * from t2
Which will show you something like this:
And, now let’s try our distance function with these vectors. Back in your notebook, run this cell. I’ve included the built-in cosine distance as well, just for comparison purposes:
query = array.array('f', model.encode("Antarctica is the driest continent").tolist())
cursor = connection.cursor()
cursor.execute("""
select
id,
jaccard_distance(v, :1),
vector_distance(v, :2, cosine)
from t2
order by id
""", [query, query])
for row in cursor:
print(f"id: {row[0]} has jaccard distance: {row[1]} and cosine distance: {row[2]}")
cursor.close()
Your output will look something like this:
id: 0 has jaccard distance: 2.0163214889484307 and cosine distance: 0.7859490566650003
id: 1 has jaccard distance: 2.0118706751976925 and cosine distance: 0.6952327173906239
id: 2 has jaccard distance: 2.0152858933816775 and cosine distance: 0.717824211314015
id: 3 has jaccard distance: 2.0216149035530537 and cosine distance: 0.6455277387099003
id: 4 has jaccard distance: 2.0132575761281766 and cosine distance: 0.6962028121886988
Well, there you go! We implemented and used a custom vector distance function. Enjoy!
Let's make a simple MCP tool for Oracle AI Vector Search
https://redstack.dev/2025/10/08/lets-make-a-simple-mcp-tool-for-oracle-ai-vector-search/
By Mark Nelson, 8 Oct 2025
In this earlier post, we created a vector store in our Oracle Database 23ai and populated it with some content from Moby Dick. Since MCP is very popular these days, I thought it might be interesting to look at how to create a very simple MCP server to expose the similarity search as an MCP tool.
Let’s jump right into it. First we are going to need a requirements.txt file with a list of the dependencies we need:
Note: I used Python 3.12 and a virtual environment.
Now let’s create a file called mcp_server.py and get to work! Let’s start with some imports:
import asyncio
import oracledb
from mcp.server import Server
from mcp.types import Tool, TextContent
from pydantic import BaseModel
from langchain_community.vectorstores import OracleVS
from langchain_community.vectorstores.utils import DistanceStrategy
from langchain_huggingface import HuggingFaceEmbeddings
And we are going to need the details of the database so we can connect to that, so let’s define some variables to hold those parameters:
# Database connection parameters for Oracle Vector Store
DB_USERNAME = "vector"
DB_PASSWORD = "vector"
DB_DSN = "localhost:1521/FREEPDB1"
TABLE_NAME = "moby_dick_500_30"
Note: These match the database and vector store used in the previous post.
Let’s create a function to connect to the database, and set up the embedding model and the vector store.
# Global variables for database connection and embedding model
# These are initialized once on server startup for efficiency
embedding_model = None # HuggingFace sentence transformer model
vector_store = None # LangChain OracleVS wrapper for vector operations
connection = None # Oracle database connection
def initialize_db():
"""
Initialize database connection and vector store
This function is called once at server startup to establish:
1. Connection to Oracle database
2. HuggingFace embedding model (sentence-transformers/all-mpnet-base-v2)
3. LangChain OracleVS wrapper for vector similarity operations
The embedding model converts text queries into 768-dimensional vectors
that can be compared against pre-computed embeddings in the database.
"""
global embedding_model, vector_store, connection
# Connect to Oracle database using oracledb driver
connection = oracledb.connect(
user=DB_USERNAME,
password=DB_PASSWORD,
dsn=DB_DSN
)
# Initialize HuggingFace embeddings model
# This model converts text to 768-dimensional vectors
# Same model used to create the original embeddings in the database
embedding_model = HuggingFaceEmbeddings(
model_name="sentence-transformers/all-mpnet-base-v2"
)
# Initialize vector store wrapper
# OracleVS provides convenient interface for vector similarity operations
vector_store = OracleVS(
client=connection,
table_name=TABLE_NAME,
embedding_function=embedding_model,
# Use cosine similarity for comparison
distance_strategy=DistanceStrategy.COSINE,
)
Again, note that I am using the same embedding model that we used to create the vectors in this vector store. This is important because we need to create embedding vectors for the queries using the same model, so that similarity comparisons will be valid. It’s also important that we use the right distance strategy – for text data, cosine is generally agreed to be the best option. For performance reasons, if we had created a vector index, we’d want to use the same algorithm so the index would be used when performing the search. Oracle will default to doing an “exact search” if there is no index and the algorithm does not match.
Now, let’s add a function to perform a query in our Moby Dick vector store, we’ll include a top-k parameter so the caller can specify how many results they want:
def search_moby_dick(query: str, k: int = 4) -> list[dict]:
"""
Perform vector similarity search on the moby_dick_500_30 table
This function:
1. Converts the query text to a vector using the embedding model
2. Searches the database for the k most similar text chunks
3. Returns results ranked by similarity (cosine distance)
Args:
query: The search query text (natural language)
k: Number of results to return (default: 4)
Returns:
List of dictionaries containing rank, content, and metadata for each result
"""
if vector_store is None:
raise RuntimeError("Vector store not initialized")
# Perform similarity search
# The query is automatically embedded and compared against database vectors
docs = vector_store.similarity_search(query, k=k)
# Format results into structured dictionaries
results = []
for i, doc in enumerate(docs):
results.append({
"rank": i + 1, # 1-indexed ranking by similarity
"content": doc.page_content, # The actual text chunk
"metadata": doc.metadata # Headers from the original HTML structure
})
return results
As you can see, this function returns a dictionary containing the rank, the content (chunk) and the metadata.
Ok, now let’s turn this into an MCP server! First let’s create the server instance:
# Create MCP server instance
# The server name "moby-dick-search" identifies this server in MCP client connections
app = Server("moby-dick-search")
Now we want to provide a list-tools method so that MCP clients can find out what kinds of tools this server provides. We are just going to have our search tool, so let’s define that:
@app.list_tools()
async def list_tools() -> list[Tool]:
"""
MCP protocol handler: returns list of available tools
Called by MCP clients to discover what capabilities this server provides.
This server exposes a single tool: search_moby_dick
Returns:
List of Tool objects with names, descriptions, and input schemas
"""
return [
Tool(
name="search_moby_dick",
description="Search the Moby Dick text using vector similarity. Returns relevant passages based on semantic similarity to the query.",
inputSchema={
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "The search query text"
},
"k": {
"type": "integer",
"description": "Number of results to return (default: 4)",
"default": 4
}
},
"required": ["query"]
}
)
]
And now, the part we’ve all been waiting for – let’s define the actual search tool (and a class to hold the arguments)!
class SearchArgs(BaseModel):
"""
Arguments for the vector search tool
Attributes:
query: The natural language search query
k: Number of most similar results to return (default: 4)
"""
query: str
k: int = 4
@app.call_tool()
async def call_tool(name: str, arguments: dict) -> list[TextContent]:
"""
MCP protocol handler: executes tool calls
Called when an MCP client wants to use one of the server's tools.
Validates the tool name, parses arguments, performs the search,
and returns formatted results.
Args:
name: Name of the tool to call
arguments: Dictionary of tool arguments
Returns:
List of TextContent objects containing the formatted search results
"""
# Validate tool name
if name != "search_moby_dick":
raise ValueError(f"Unknown tool: {name}")
# Parse and validate arguments using Pydantic model
args = SearchArgs(**arguments)
# Perform the vector similarity search
results = search_moby_dick(args.query, args.k)
# Format response as human-readable text
response_text = f"Found {len(results)} results for query: '{args.query}'\n\n"
for result in results:
response_text += f"--- Result {result['rank']} ---\n"
response_text += f"Metadata: {result['metadata']}\n"
response_text += f"Content: {result['content']}\n\n"
# Return as MCP TextContent type
return [TextContent(type="text", text=response_text)]
That was not too bad. Finally, let’s set up a main function to start up everything and handle the requests:
async def main():
"""
Main entry point for the MCP server
This function:
1. Initializes the database connection and embedding model
2. Sets up stdio transport for MCP communication
3. Runs the server event loop to handle requests
The server communicates via stdio (stdin/stdout), which allows
it to be easily spawned by MCP clients as a subprocess.
"""
# Initialize database connection and models
initialize_db()
# Import stdio server transport
from mcp.server.stdio import stdio_server
# Run the server using stdio transport
# The server reads MCP protocol messages from stdin and writes responses to stdout
async with stdio_server() as (read_stream, write_stream):
await app.run(
read_stream,
write_stream,
app.create_initialization_options()
)
if __name__ == "__main__":
asyncio.run(main())
Ok, that’s it! We can run this with the command:
python mcp_server.py
Now, to test it, we’re groing to need a client! MCP Inspector is the logical place to start, you can get it from here, or (assuming you have node installed) by just running this command:
That’s going to start up a UI that looks like this:
Click on the connect button, and you should see an updated screen in a few seconds that looks like this:
Go ahead and click on List Tools and you will see our Search Moby Dick Tool show up – click on it to try it out.
You should see some results like this:
There you go, it works great! And that’s a super simple, basic MCP server and tool! Enjoy.
Exploring securing vector similarity searches with Real Application Security
https://redstack.dev/2025/10/08/exploring-securing-vector-similarity-searches-with-real-application-security/
By Mark Nelson, 8 Oct 2025
In this post, I want to explore how you can use Real Application Security to provide access controls for vectors in a vector store in Oracle Database 23ai.
I’m going to use the vector store we created in the last post as an example. If you want to follow along, you should follow that one first to create and populate your vector store, then come back here.
You should have a vector store table called MOBY_DICK_500_30 that you created in that previous post. You can connect to Oracle using SQLcl or SQL*Plus or whatever tool you prefer and check the structure of that table:
SQL> describe moby_dick_500_30
Name Null? Type
____________ ___________ ____________________________
ID NOT NULL RAW(16 BYTE)
TEXT CLOB
METADATA JSON
EMBEDDING VECTOR(768,FLOAT32,DENSE)
Let’s observe that that metadata column contains the document structure information from the loaders that we used. If we filter for Chapter 12, we can see there are 13 vectors associated with that chapter:
SQL> select metadata from moby_dick_500_30 where metadata like '%CHAPTER 12.%';
METADATA
__________________________________________________________________________________
{"Header 1":"MOBY-DICK; or, THE WHALE.","Header 2":"CHAPTER 12. Biographical."}
{"Header 1":"MOBY-DICK; or, THE WHALE.","Header 2":"CHAPTER 12. Biographical."}
{"Header 1":"MOBY-DICK; or, THE WHALE.","Header 2":"CHAPTER 12. Biographical."}
{"Header 1":"MOBY-DICK; or, THE WHALE.","Header 2":"CHAPTER 12. Biographical."}
{"Header 1":"MOBY-DICK; or, THE WHALE.","Header 2":"CHAPTER 12. Biographical."}
{"Header 1":"MOBY-DICK; or, THE WHALE.","Header 2":"CHAPTER 12. Biographical."}
{"Header 1":"MOBY-DICK; or, THE WHALE.","Header 2":"CHAPTER 12. Biographical."}
{"Header 1":"MOBY-DICK; or, THE WHALE.","Header 2":"CHAPTER 12. Biographical."}
{"Header 1":"MOBY-DICK; or, THE WHALE.","Header 2":"CHAPTER 12. Biographical."}
{"Header 1":"MOBY-DICK; or, THE WHALE.","Header 2":"CHAPTER 12. Biographical."}
{"Header 1":"MOBY-DICK; or, THE WHALE.","Header 2":"CHAPTER 12. Biographical."}
{"Header 1":"MOBY-DICK; or, THE WHALE.","Header 2":"CHAPTER 12. Biographical."}
{"Header 1":"MOBY-DICK; or, THE WHALE.","Header 2":"CHAPTER 12. Biographical."}
13 rows selected.
We are going to use this metadata to filter access to the vectors.
Set up permissions
Let’s start by setting up the necessary permissions. You will need to run this as the SYS user:
alter session set container=freepdb1;
grant create session, xs_session_admin to vector;
exec sys.xs_admin_util.grant_system_privilege('provision', 'vector', sys.xs_admin_util.ptype_db);
grant create role to vector;
exec sys.xs_admin_util.grant_system_privilege('admin_sec_policy', 'vector', sys.xs_admin_util.ptype_db);
exec sys.xs_admin_util.grant_system_privilege('ADMIN_ANY_SEC_POLICY', 'vector', sys.xs_admin_util.ptype_db);
Great! Now let’s set up Real Application Security. We will run the rest of these commands as the VECTOR user.
So there you have it! We can define policies to easily control access to vectors. In this example we used the metadata to create the filtering rules; of course, you could create whatever kind of rules you need.
This allows you to have a vector store which can be easily filtered for different users (or roles), essentially creating a virtual private vector store. You might want to allow the 'customer-support' role to access a certain subset of vectors, for example, but allow the 'supervisor' role to access a larger set (or all) of the vectors.
What's great about this is that the security is enforced in the database itself. When an AI assistant, chatbot, MCP client, etc., performs a vector search, it will only ever be able to get back results from the vectors that the user is allowed to see. The database will never send vectors to users which they are not allowed to see. So you don't have to worry about trusting the LLM not to make a mistake and give out the wrong data, because it will literally never see the data in the first place.
Basic Retrieval Augmented Generation with Oracle Vector Store in LangChain
https://redstack.dev/2025/05/23/basic-retrieval-augmented-generation-with-oracle-vector-store-in-langchain/
By Mark Nelson, 23 May 2025
In this earlier post, we learned how to create a Vector Store in Oracle from LangChain, the hugely popular Python library for working with Generative AI and Large Language Models. We populated it with a little data and then performed some simple vector similarity searches.
In this post, let’s expand on that to implement basic Retrieval Augmented Generation!
First, let’s talk about some concepts – if you alread know this, feel free to jump ahead!
Generative AI – This is a type of Artificial Intelligence (AI) that uses a specialized form of machine learning model, called a Large Language Model (or “LLM”), to create (“generate”) new content based on a prompt from a user. It works by looking at the “tokens” that it received in the input (the “prompt”) and then figuring out what is the most probable next token in the sequence. What is a token? Well, it may be a word, or a part of a word, but we use the word “token” because it could also be part of an audio file or an image, since some of these models support other types of data, not just text. Note that it only generates one token at a time – you have to “run” the model again for every subsequent token.
Training – these models are “trained” by exposing them to very large amounts of data. Usually the data is publicly available information, collected from the Internet and/or other repositories. Training a model is very expensive, both in terms of time, and in terms of the cost of running the specialized GPU hardware needed to perform the training. You may see a model described as having “70 billion parameters” or something like that. Training is basically a process of tuning the probabilities of each of these parameters based on the new input.
When a model sees a prompt like "My Husky is a very good" it will use those probabilities to determine what comes next. In this example, "dog" would have a very high probability of being the next "token".
Hyper-parameters – models also have extra parameters that control how they behave. These so-called "hyper-parameters" include things like "temperature" which controls how creative the model will be, "top-K" which controls how many options the model will consider when choosing the next token, and various kinds of "frequency penalties" that will cause the model to be more or less likely to reuse/repeat tokens. Of course these are just a few examples.
Knowledge cut-off – an important property of LLMs is that they have not been exposed to any information that was created after their training ended. So a model trained in 2023 would not know who won an election held in 2024, for example.
Hallucination – LLMs tend to "make up an answer" if they do not "know" the answer. Now, obviously they don't really know anything in the same sense that we know things; they are working with probabilities. But if we can anthropomorphize them for a moment, they tend to "want" to be helpful, and they are very likely to offer you a very confident but completely incorrect answer if they do not have the necessary information to answer a question.
Now, of course a lot of people want to use this exciting new technology to implement solutions to help their customers or users. ChatBots is a prime example of something that is frequently implemented using Generative AI these days. But, of course no new technology is a silver bullet, and they all come with their own challenges and issues. Let’s consider some common challenges when attempting to implement a project with Generative AI:
Which model to use? There are many models available, and they have all been trained differently. Some are specialized models that are trained to perform a particular task, for example summarizing a document. Other models are general purpose and can perform different tasks. Some models understand only one language (like English) and others understand many. Some models only understand text, others only images, others video, and others still are multi-modal and understand various different types of data. Models also have different licensing requirements. Some models are provided as a service, like a utility, where you typically pay some very small amount per request. Other models can be self-hosted, i.e., run on your own hardware.
Privacy. Very often the data that you need for your project is non-public data, and very often you do not want to share that data with a third-party organization for privacy reasons, or even regulatory reasons, depending on your industry. People are also very wary about a third-party organization using their non-public data to train future models.
How to “tune” the model’s hyper-parameters. As we discussed earlier, the hyper-parameters control how the model behaves. The settings of these parameters can have a significant impact on the quality of the results that are produced.
Dealing with knowledge cut-off. Giving the model access to information that is newer than when it was trained is also a key challenge. Probably the most obvious way to do this is to continue the model’s training by exposing it to this newer information. This is known as “fine-tuning”. The key challenge is that this is an extremely expensive undertaking, requiring specialized GPU hardware and very highly skilled people to plan and run the training.
Enter “Retrieval Augmented Generation,” (or “RAG”) first introduced by Patrick Lewis et al, from Meta, in the 2020 paper “Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks“. RAG is a technique that gives LLMs access to non-public information and/or information created after their training and is orders of magnitude less expensive than fine-tuning.
The essence of RAG is to provide the LLM with the information that it needs to answer a question by “stuffing” that information into the LLM’s “context window” or “prompt” along with the actual question. It’s a bit like an open-book test. Imagine you get a question like this:
How much does a checking account from (some bank) cost?
And let’s assume that information is not readily available on the Internet. How would you come up with the answer? You likely could not.
But if the question was more like this:
How much does a checking account from (some bank) cost?
To answer this question, consult these DOCUMENTS:
(here there would be the actual content of those documents that provide the information necessary to answer that question)
Much easier right? That’s basically what RAG is. It provides the most relevant information to the LLM so that it can answer the question.
So now, the obvious questions are – where does it get this information from, and how does it know which parts are the most relevant?
This is where our Vector Store comes in!
The set of information, the non-public data that we want the LLM to use, we call that the “corpus”. Very often the corpus will be so large that there is no reasonable way for us to just give the LLM the whole thing. Now, as I am writing this in May 2025, there are models that have very large “context windows” and could be given a large amount of data. Llama 4 was just released, as I write this, and has a context window size of 10 million tokens! So you could in fact give it a large amount of information. But models that were released as recently as six or twelve months ago have much smaller context windows.
So the approach we use is to take the corpus, and split it up into small pieces, called “chunks” and we create an “embedding vector” for each of these chunks. This vector is basically an n-dimensional numerical representation of the semantic meaning of the chunk. Chunks with similar meanings will have similar (i.e., close) vectors. Chunks with different meanings will have vectors that are further apart.
Now, visualizing an n-dimensional vector is challenging. But if n=2, it’s a lot easier. So let’s do that! Remember, in real models, n is much more likely to be in the thousands or tens of thousands, but the concepts are the same. Consider the diagram below:
In this diagram, we have only two dimensions, the vertical dimension is “largeness” – how large (or small) the thing is. The horizontal dimension is “dog-ness” – how much the thing is (or is not) a dog.
Notice that both the Saint Bernard and the German Shepherd (I hope I got those breeds right!) are large dogs. So the vectors for both of them are high on both axes and very close together, because in this two-dimensional world, they are indeed very, very similar. The wolf is also large, but it is not actually a dog. Dogs are related to (descended from) wolves, so it is somewhat dog-like, but its vector is quite a distance away from the actual large dogs.
Now, look at the tennis ball! It is not large, and it is not a dog, so its vector is almost in the complete opposite direction to the large dogs.
Now, consider the question “Is a Husky a large dog?”
What we do in RAG, is we take that question, and turn that into a vector, using the exact same “embedding model” that we used to create those vectors we just looked at above, and then we see what other vectors are close to it.
Notice that the resulting vector, represented by the red dot, ended up very close to those two large dogs! So if we did a similarity search, that is, if we found the closest vectors to our question vector, what we would get back is the vectors for the Saint Bernard and the German Shepherd.
Here’s a diagram of the RAG process:
So we take the question from the user, we turn it into a vector, we find the closest vectors to it in our corpus, and then we get the actual content that those vectors were created from and give that information to the LLM to allow it to answer the question. Remember, in real life there are many more dimensions, and they are not going to be some concept that we can neatly label, like “largeness”. The actual dimensions are things that are learned by the model over many billions of iterations of weight adjustments as it was exposed to vast amounts of data. The closest (non-mathematical) analogy I can think of is Isaac Asimov’s “positronic brain” in his Robots, Empire and Foundation series, which he described as learning through countless small adjustments of uncountable numbers of weights.
Wow! That was a lot of theory! Let’s get back to some code, please!
In the previous post, we populated our vector store with just three very small quotes from Moby Dick. Now, let’s use the entire text!
Let’s create a new notebook. If you followed along in the previous post, you can just create a new notebook in the same project and choose the same environment/kernel. If not, create a new project, then create a notebook, for example basic-rag.ipynb and create a kernel:
Click on the Select Kernel button (it’s on the top right). Select Python Environment then Create Python Environment. Select the option to create a Venv (Virtual Environment) and choose your Python interpreter. I recommend using at least Python 3.11. This will download all the necessary files and will take a minute or two.
If you created a new environment, install the necessary packages by creating and running a cell with this content. Note that you can run this even if you have a pre-existing environment, it won’t do any harm:
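Here is roughly what I use for that install cell; the exact package list depends on what you already have installed, but these cover the imports we use in this post:
%pip install -qU "langchain[openai]" langchain-community langchain-huggingface langchain-text-splitters oracledb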
Now, create and run this cell to set your OpenAI API key:
import getpass
import os
if not os.environ.get("OPENAI_API_KEY"):
    os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter API key for OpenAI: ")
from langchain.chat_models import init_chat_model
model = init_chat_model("gpt-4o-mini", model_provider="openai")
model.invoke("Hello, world!")
Paste your key in when prompted (see the previous post if you need to know how to get one) and confirm you got the expected response from the model.
Now, let’s connect to the database by creating and running this cell (this assumes that you started the database container and created the user as described in the previous post!)
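In case you need it, my connection cell looks something like this, using the vector user and the FREEPDB1 pluggable database from the previous post (adjust the credentials and DSN for your environment):
import oracledb

# connect to the local database container from the previous post
connection = oracledb.connect(
    user="vector",
    password="vector",
    dsn="localhost:1521/FREEPDB1",
)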
Ok, now we are ready to read that document and create our vector embeddings. But how? In the previous post we manually created some excerpts, but now we want to read the whole document.
Enter Document Loaders! Take a look at that page, LangChain has hundreds of different document loaders that understand all kinds of document formats.
Let’s try the basic web loader, create and run this cell to install it:
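The loader lives in the langchain-community package and uses BeautifulSoup under the hood, so a cell along these lines should cover it:
%pip install -qU langchain-community beautifulsoup4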
Now create and run this cell to initialize the document loader:
from langchain_community.document_loaders import WebBaseLoader
loader = WebBaseLoader("https://www.gutenberg.org/cache/epub/2701/pg2701-images.html")
Now load the documents by running this cell:
docs = loader.load()
If you’d like, take a look at the result, by running this cell:
docs[0]
Well, that is just one big document, which is not so helpful; we want to split that document up into smaller chunks so we can create vectors for each smaller part. Let’s use a document splitter instead.
Install a splitter by running this cell:
%pip install -qU langchain-text-splitters
Note: Check out this page for more information about the available splitters. We are going to use the HTMLHeaderTextSplitter. Run this cell:
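With the HTMLHeaderTextSplitter, the cell looks something like this. The header levels to split on are my choice (I’m assuming the chapter structure lives in h1/h2/h3 tags), so adjust them to taste:
from langchain_text_splitters import HTMLHeaderTextSplitter

headers_to_split_on = [("h1", "Header 1"), ("h2", "Header 2"), ("h3", "Header 3")]
html_splitter = HTMLHeaderTextSplitter(headers_to_split_on=headers_to_split_on)

# split the Moby Dick HTML straight from the URL, keeping the header metadata
html_splits = html_splitter.split_text_from_url("https://www.gutenberg.org/cache/epub/2701/pg2701-images.html")
html_splits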
You’ll see a long list of Documents and if you look carefully, you can see that it has maintained the structure information.
Great! That’s a lot better.
Now, let’s suppose we wanted to constrain the size of the chunks. Some of those might be too big, we might want to split them even further. We can do that with a RecursiveCharacterTextSplitter.
Let’s say we wanted chunks no bigger than 500 characters, with an overlap of 30. Now this might not be a good idea, but just for the sake of the example, let’s do it by running this cell:
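Using the chunk size and overlap from above, and feeding in the html_splits from the previous step (that variable name is just the one I used in my sketch above), the cell looks like this:
from langchain_text_splitters import RecursiveCharacterTextSplitter

text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=30)

# split the header-based chunks into pieces of at most 500 characters
splits = text_splitter.split_documents(html_splits)

Now we can load those splits into a new vector store table. Create and run this cell: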
from langchain_community.vectorstores import oraclevs
from langchain_community.vectorstores.oraclevs import OracleVS
from langchain_community.vectorstores.utils import DistanceStrategy
from langchain_core.documents import Document
from langchain_huggingface import HuggingFaceEmbeddings
embedding_model = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")
vector_store = OracleVS.from_documents(
    splits,
    embedding_model,
    client=connection,
    table_name="moby_dick_500_30",
    distance_strategy=DistanceStrategy.COSINE,
)
We are using the same model as we did in the previous post, but now we are passing in the splits we just created – the 500-character chunks derived from the larger, structure-aware chunks of the HTML document. And we called our vector store table moby_dick_500_30 to make it a little easier to remember what we put in there.
After that cell has finished (it might take a few minutes), you can take a look to see what is in the vector store by running this command in your terminal window:
docker exec -i db23ai sqlplus vector/vector@localhost:1521/FREEPDB1 <<EOF
select table_name from user_tables;
describe documents_cosine;
column id format a20;
column text format a30;
column metadata format a30;
column embedding format a30;
set linesize 150;
select * from moby_dick_500_30
fetch first 3 rows only;
EOF
You should get something similar to this:
Let’s try our searches again, run this cell:
query = 'Where is Rokovoko?'
print(vector_store.similarity_search(query, 1))
query2 = 'What does Ahab like to do after breakfast?'
print(vector_store.similarity_search(query2, 1))
You can change that 1 to a larger number now, since you have many more vectors, to see what you get!
Ok, now we have all the pieces we need and we are ready to implement the RAG!
The most basic way to implement RAG is to use a “retriever” – we can grab one from our vector store like this:
retriever = vector_store.as_retriever()
Try it out by asking a question:
docs = retriever.invoke("Where is Rokovoko?")
docs
You’ll get something like this:
Nearly there!
Now, we want to give the LLM a good prompt to tell it what to do, and include the retrieved documents. Let’s use a standard prompt for now:
You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.
Question: (question goes here)
Context: (context goes here)
Answer:
OK, now let’s put it all together. In real life we’d probably want to use LangGraph at this point, and we’d want to think about including things like memory, ranking the results from the vector search, citations/references (“grounding” the answer), and streaming the output. But that’s all for another post! For now, let’s just do the most basic implementation:
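Here is one minimal way to wire it together. I’m just formatting the prompt by hand and invoking the model directly, so treat this as a sketch rather than the one true way to do it:
question = "Where is Rokovoko?"

# retrieve the most relevant chunks and join them into a single context string
context_docs = retriever.invoke(question)
context = "\n\n".join(doc.page_content for doc in context_docs)

prompt = f"""You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.

Question: {question}

Context: {context}

Answer:"""

model.invoke(prompt)

Run that cell and you should get a response something like this: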
AIMessage(content='Rokovoko is an island located far away to the West and South, as mentioned in relation to Queequeg, a native of the island. It is not found on any maps, suggesting it may be fictional.', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 46, 'prompt_tokens': 402, 'total_tokens': 448, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_54eb4bd693', 'id': 'chatcmpl-BaW3g2omlCY0l6LwDkC9Ub8Ls3V88', 'service_tier': 'default', 'finish_reason': 'stop', 'logprobs': None}, id='run--86f14be2-9c8f-43c9-ae89-259db1c640bd-0', usage_metadata={'input_tokens': 402, 'output_tokens': 46, 'total_tokens': 448, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}})
That’s a pretty good answer!
Well, there you go, we covered a lot of ground in this post, but that’s just a very basic RAG. Stay tuned to learn about implementing a more realistic RAG in the next post!
]]>https://redstack.dev/2025/05/23/basic-retrieval-augmented-generation-with-oracle-vector-store-in-langchain/feed/2markxnelsonGetting started with Oracle Vector Store support in LangChain
https://redstack.dev/2025/05/16/getting-started-with-oracle-vector-store-support-in-langchain/
https://redstack.dev/2025/05/16/getting-started-with-oracle-vector-store-support-in-langchain/#comments<![CDATA[Mark Nelson]]>Fri, 16 May 2025 22:00:53 +0000<![CDATA[Uncategorized]]><![CDATA[ai]]><![CDATA[artificial-intelligence]]><![CDATA[langchain]]><![CDATA[llm]]><![CDATA[oracle]]><![CDATA[rag]]><![CDATA[semantic-search]]><![CDATA[vector-store]]>http://redstack.dev/?p=4041<![CDATA[In this post, I would like to show you the basics of how to use the Oracle Vector Store support in LangChain. I am using Visual Studio Code with the Python and Jupyter extensions from Microsoft installed. I will show … Continue reading →]]><![CDATA[
In this post, I would like to show you the basics of how to use the Oracle Vector Store support in LangChain. I am using Visual Studio Code with the Python and Jupyter extensions from Microsoft installed. I will show more detailed usage in future posts!
Prefer to watch a video? Check it out here:
To get started, create a new project in Visual Studio Code, and then create a new Jupyter Notebook using File > New File… then choose Jupyter Notebook as the type of file, and save your new file as getting_started.ipynb.
First, we need to set up the Python runtime environment. Click on the Select Kernel button (it’s on the top right). Select Python Environment then Create Python Environment. Select the option to create a Venv (Virtual Environment) and choose your Python interpreter. I recommend using at least Python 3.11. This will download all the necessary files and will take a minute or two.
In this example, we will use OpenAI for our chat model. You’ll need to get an API Key from OpenAI, which you can do by logging into https://platform.openai.com/settings/organization/api-keys and creating a key. Of course you could use a different model, including a self-hosted model so that you don’t have to send your data outside your organization. I’ll cover that in future posts, stay tuned!
In the first cell, check that the type is Python and enter this code:
%pip install -qU "langchain[openai]"
Press Shift+Enter or click on the Run icon to run this code block. This will also take a minute or so to install the LangChain library for OpenAI.
Now create a second cell and paste in this code:
import getpass
import os
if not os.environ.get("OPENAI_API_KEY"):
    os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter API key for OpenAI: ")
from langchain.chat_models import init_chat_model
model = init_chat_model("gpt-4o-mini", model_provider="openai")
Run this block, and when it prompts you for your key, paste in the key, it will start with something like sk-proj and have a long string of mostly letters and numbers after that. This will save your key in the environment so that you don’t have to keep entering it each time.
Now, we are ready to talk to the LLM! Let’s try a simple prompt. Create a new cell and enter this code:
model.invoke("Hello, world!")
Run this cell and observe the output. It should look something like this:
Great, now we are ready to connect to a vector store. If you don’t already have one, start up an instance of Oracle Database 23ai in a container on your machine. Run this command in a terminal window (not the notebook)
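The command looks like this. Note the container name db23ai and the password, both of which we will use again below, so adjust them consistently if you change them:
docker run -d --name db23ai -p 1521:1521 -e ORACLE_PWD=Welcome12345 container-registry.oracle.com/database/free:latest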
This will start up an Oracle Database 23ai Free instance in a container. It will have a PDB called FREEPDB1 and the password for PDBADMIN (and SYS and SYSTEM) will be Welcome12345.
Now, run the following command to create an Oracle user with appropriate permissions to create a vector store:
docker exec -i db23ai sqlplus sys/Welcome12345@localhost:1521/FREEPDB1 as sysdba <<EOF
alter session set container=FREEPDB1;
create user vector identified by vector;
grant connect, resource, unlimited tablespace, create credential, create procedure, create any index to vector;
commit;
EOF
Let’s connect to the database! First we’ll install the oracledb library. Create a new cell and enter this code:
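Something like this will do. I’m adding the LangChain community and Hugging Face packages here as well, since we’ll import them in a moment:
%pip install -qU oracledb langchain-community langchain-huggingface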
Now, import the things we will need by creating a new cell with this code and running it:
from langchain_community.vectorstores import oraclevs
from langchain_community.vectorstores.oraclevs import OracleVS
from langchain_community.vectorstores.utils import DistanceStrategy
from langchain_core.documents import Document
from langchain_huggingface import HuggingFaceEmbeddings
We are going to need some documents to load into the vector store, so let’s define some to use for an example. In real life, you’d probably want to use your own non-public documents to load a vector store if you were building a chatbot or using retrieval augmented generation. Create and run a new cell with this code:
documents_json_list = [
{
"id": "moby_dick_2701_P1",
"text": "Queequeg was a native of Rokovoko, an island far away to the West and South. It is not down in any map; true places never are.",
"link": "https://www.gutenberg.org/cache/epub/2701/pg2701-images.html#link2HCH0012",
},
{
"id": "moby_dick_2701_P2",
"text": "It was not a great while after the affair of the pipe, that one morning shortly after breakfast, Ahab, as was his wont, ascended the cabin-gangway to the deck. There most sea-captains usually walk at that hour, as country gentlemen, after the same meal, take a few turns in the garden.",
"link": "https://www.gutenberg.org/cache/epub/2701/pg2701-images.html#link2HCH0036",
},
{
"id": "moby_dick_2701_P3",
"text": "Now, from the South and West the Pequod was drawing nigh to Formosa and the Bashee Isles, between which lies one of the tropical outlets from the China waters into the Pacific. And so Starbuck found Ahab with a general chart of the oriental archipelagoes spread before him; and another separate one representing the long eastern coasts of the Japanese islands—Niphon, Matsmai, and Sikoke. ",
"link": "https://www.gutenberg.org/cache/epub/2701/pg2701-images.html#link2HCH0109",
},
]
Now, let’s load them into a LangChain documents list with some metadata. Create and run a cell with this code:
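The cell needs to do a few things: wrap each entry in a LangChain Document (keeping the id and link as metadata), connect to the database with oracledb, pick an embedding model, and then create the vector store table. Here’s a sketch of that; I’m using the table name documents_cosine, which is the table we query below:
import oracledb

# turn the JSON entries into LangChain Documents, keeping id and link as metadata
documents = [
    Document(page_content=doc["text"], metadata={"id": doc["id"], "link": doc["link"]})
    for doc in documents_json_list
]

# connect to the database container we started earlier
connection = oracledb.connect(user="vector", password="vector", dsn="localhost:1521/FREEPDB1")

# a Hugging Face sentence-transformer embedding model (768 dimensions)
embedding_model = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")

# create the vector store table and load the documents into it
vector_store = OracleVS.from_documents(
    documents,
    embedding_model,
    client=connection,
    table_name="documents_cosine",
    distance_strategy=DistanceStrategy.COSINE,
)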
Let’s have a look in the database and see what was created. Run this code in your terminal:
docker exec -i db23ai sqlplus vector/vector@localhost:1521/FREEPDB1 <<EOF
select table_name from user_tables;
describe documents_cosine;
column id format a20;
column text format a30;
column metadata format a30;
column embedding format a30;
set linesize 150;
select * from documents_cosine;
EOF
You should see output similar to this:
SQL>
TABLE_NAME
--------------------------------------------------------------------------------
DOCUMENTS_COSINE
SQL> Name Null? Type
----------------------------------------- -------- ----------------------------
ID NOT NULL RAW(16)
TEXT CLOB
METADATA JSON
EMBEDDING VECTOR(768, FLOAT32)
SQL> SQL> SQL> SQL> SQL> SQL>
ID TEXT METADATA EMBEDDING
-------------------- ------------------------------ ------------------------------ ------------------------------
957B602A0B55C487 Now, from the South and West t {"id":"moby_dick_2701_P3","lin [9.29364376E-003,-5.70030287E-
he Pequod was drawing nigh to k":"https://www.gutenberg.org/ 002,-4.62282933E-002,-1.599499
Formosa and the Bash cache/epub/2701/pg27 58E-002,
A8A71597D56432FD Queequeg was a native of Rokov {"id":"moby_dick_2701_P1","lin [4.28722538E-002,-8.80071707E-
oko, an island far away to the k":"https://www.gutenberg.org/ 003,3.56001826E-003,6.765306E-
West and South. It cache/epub/2701/pg27 003,
E7675836CF07A695 It was not a great while after {"id":"moby_dick_2701_P2","lin [1.06763924E-002,3.91203648E-0
the affair of the pipe, that k":"https://www.gutenberg.org/ 04,-1.01576066E-002,-3.5316135
one morning shortly cache/epub/2701/pg27 7E-002,
Now, let’s do a vector similarity search. Create and run a cell with this code:
query = 'Where is Rokovoko?'
print(vector_store.similarity_search(query, 1))
query2 = 'What does Ahab like to do after breakfast?'
print(vector_store.similarity_search(query2, 1))
This will find the one (1) nearest match in each case. You should get an answer like this:
[Document(metadata={'id': 'moby_dick_2701_P1', 'link': 'https://www.gutenberg.org/cache/epub/2701/pg2701-images.html#link2HCH0012'}, page_content='Queequeg was a native of Rokovoko, an island far away to the West and South. It is not down in any map; true places never are.')]
[Document(metadata={'id': 'moby_dick_2701_P2', 'link': 'https://www.gutenberg.org/cache/epub/2701/pg2701-images.html#link2HCH0036'}, page_content='It was not a great while after the affair of the pipe, that one morning shortly after breakfast, Ahab, as was his wont, ascended the cabin-gangway to the deck. There most sea-captains usually walk at that hour, as country gentlemen, after the same meal, take a few turns in the garden.')]
Well, there you go, that’s the most basic example of creating a vector store, loading some documents into it and doing a simple similarity search. Stay tuned to learn about more advanced features!
]]>https://redstack.dev/2025/05/16/getting-started-with-oracle-vector-store-support-in-langchain/feed/1markxnelsonRunning Oracle Autonomous Database in a container
https://redstack.dev/2025/04/25/running-oracle-autonomous-database-in-a-container/
https://redstack.dev/2025/04/25/running-oracle-autonomous-database-in-a-container/#respond<![CDATA[Mark Nelson]]>Fri, 25 Apr 2025 14:51:04 +0000<![CDATA[Uncategorized]]><![CDATA[ADB]]><![CDATA[autonomous]]><![CDATA[container]]><![CDATA[database]]><![CDATA[oracle]]><![CDATA[oracle-database]]><![CDATA[ORDS]]><![CDATA[sql]]>http://redstack.dev/?p=4020<![CDATA[Did you know that you can easily run Oracle Autonomous Database in a container on your local machine? This is a great for development. It’s totally free, and you don’t even need to authenticate to pull the image. It also … Continue reading →]]><![CDATA[
Did you know that you can easily run Oracle Autonomous Database in a container on your local machine? This is great for development. It’s totally free, and you don’t even need to authenticate to pull the image. It also includes Oracle REST Data Services, APEX, Database Actions and the MongoDB API, so you get a nice built-in browser-based UI to work with your database. The free version does have a 20GB limit on database size, but for development purposes, that’s fine.
To start up a database, you can use this command, just replace the “xxxxxx”s with proper passwords. Note that the volume is needed so data will be persisted across container restarts, if you leave that out, you’ll get a new empty database every time you restart the container:
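The flags below follow the published adb-free image documentation, so double-check them against the image page for your version; in particular the environment variable names and the volume mount path are taken from those docs rather than something I can promise will never change:
docker run -d --name adb-free \
  -p 1521:1522 -p 1522:1522 -p 8443:8443 -p 27017:27017 \
  -e WORKLOAD_TYPE=ATP \
  -e ADMIN_PASSWORD=xxxxxx \
  -e WALLET_PASSWORD=xxxxxx \
  --cap-add SYS_ADMIN \
  --device /dev/fuse \
  -v adb_container_volume:/u01/data \
  container-registry.oracle.com/database/adb-free:latest-23ai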
Here’s what the Database Actions login page looks like, you can log in with the user “admin” and the password you specified:
When you sign in, you will see the launchpad, from where you can access various tools:
For example, you could open the SQL tool and try executing a statement:
You may also want to connect to your database using other tools like Oracle SQL Developer (which is a Visual Studio Code extension) or SQLcl (which is a command line tool), or from a program. To do this, you will probably want to grab the wallet, read on!
Connecting to the database
If you want to use mTLS, you can get the wallet by copying it from the image using this command, just provide the desired destination path in the last argument:
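Something like this works; the wallet location inside the container is the one documented for the adb-free image, so adjust it if it differs in your version:
docker cp adb-free:/u01/app/oracle/wallets/tls_wallet /path/to/wallet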
Note that the address will be ‘localhost’ in the tnsnames.ora, so you will need to update that if necessary.
To use the wallet, set your TNS_ADMIN environment variable:
export TNS_ADMIN=/path/to/wallet
The following TNS aliases are provided, for mTLS:
myatp_medium
myatp_high
myatp_low
myatp_tp
myatp_tpurgent
And for TLS:
myatp_medium_tls
myatp_high_tls
myatp_low_tls
myatp_tp_tls
myatp_tpurgent_tls
Here’s an example of connecting with SQLcl:
$ TNS_ADMIN=/path/to/wallet sql admin/xxxxxx@myatp_high
SQLcl: Release 24.1 Production on Fri Apr 25 10:41:46 2025
Copyright (c) 1982, 2025, Oracle. All rights reserved.
Last Successful login time: Fri Apr 25 2025 10:41:48 -04:00
Connected to:
Oracle Database 23ai Enterprise Edition Release 23.0.0.0.0 - Production
Version 23.6.0.24.11
SQL> select sysdate;
SYSDATE
____________
25-APR-25
Here’s an example of connecting from SQL Developer. When you create the connection, just choose the location of the wallet (tnsnames.ora file) and it will let you select the TNS name to connect to:
Enjoy!
]]>https://redstack.dev/2025/04/25/running-oracle-autonomous-database-in-a-container/feed/0markxnelsonHow to read the content of a JMS message using PL/SQL
https://redstack.dev/2023/05/26/how-to-read-the-content-of-a-jms-message-using-pl-sql/
https://redstack.dev/2023/05/26/how-to-read-the-content-of-a-jms-message-using-pl-sql/#respond<![CDATA[Mark Nelson]]>Fri, 26 May 2023 17:50:57 +0000<![CDATA[Uncategorized]]><![CDATA[JMS]]><![CDATA[oracle]]><![CDATA[springboot]]><![CDATA[TEQ]]><![CDATA[txeventq]]>http://redstack.wordpress.com/?p=4000<![CDATA[This is just a short post – but all the details are in this post from Rob Van Wijk. Today I wanted to read the contents of a JMS Text Message sitting in a queue. I wrote a Spring Boot … Continue reading →]]><![CDATA[
This is just a short post – but all the details are in this post from Rob Van Wijk.
Today I wanted to read the contents of a JMS Text Message sitting in a queue. I wrote a Spring Boot microservice that sends a message, and I have not written the one that receives and processes the message yet, so I wanted to look at the message on the queue to check it was correct.
So I went and did a good old “select user_data from deposits_qt” and stared at the answer: “Object”. Hmmm, not what I wanted.
After a quick bit of Googling, I found Rob’s post which told me exactly what I needed to know. Yay! Thanks Rob!
Then I changed my query to this:
select qt.user_data.text_vc from account.deposits_qt qt;
And I got exactly what I needed:
{"accountId":2,"amount":200}
Fantastic! Thanks a lot, Rob!
]]>https://redstack.dev/2023/05/26/how-to-read-the-content-of-a-jms-message-using-pl-sql/feed/0markxnelsonVote for my session at VMWare Explore (Spring track)!
https://redstack.dev/2023/05/16/vote-for-my-session-at-vmware-explore-spring-track/
https://redstack.dev/2023/05/16/vote-for-my-session-at-vmware-explore-spring-track/#respond<![CDATA[Mark Nelson]]>Tue, 16 May 2023 13:41:05 +0000<![CDATA[Uncategorized]]><![CDATA[Spring]]><![CDATA[springboot]]><![CDATA[vmwareexplore]]>http://redstack.wordpress.com/?p=3995<![CDATA[Vote for my People’s Choice Session “Experiences and lessons learnt building a multi-cloud #SpringBoot backend (ID 2002)” to be featured at #VMwareExplore 2023 Las Vegas! Place your vote by May 26: https://lnkd.in/eiRi-YF7 Register for VMWare Explore here. Learn more about Oracle Backend for Spring Boot … Continue reading →]]><![CDATA[
Vote for my People’s Choice Session “Experiences and lessons learnt building a multi-cloud #SpringBoot backend (ID 2002)” to be featured at #VMwareExplore 2023 Las Vegas! Place your vote by May 26: https://lnkd.in/eiRi-YF7
Register for VMWare Explore here. Learn more about Oracle Backend for Spring Boot here.
]]>https://redstack.dev/2023/05/16/vote-for-my-session-at-vmware-explore-spring-track/feed/0markxnelsonStart up an Oracle Database in Kubernetes with Oracle REST Data Services and Database Actions in no time at all!
https://redstack.dev/2023/05/01/start-up-an-oracle-database-in-kubernetes-with-oracle-rest-data-services-and-database-actions-in-no-time-at-all/
https://redstack.dev/2023/05/01/start-up-an-oracle-database-in-kubernetes-with-oracle-rest-data-services-and-database-actions-in-no-time-at-all/#respond<![CDATA[Mark Nelson]]>Mon, 01 May 2023 19:33:11 +0000<![CDATA[Uncategorized]]><![CDATA[database]]><![CDATA[database actions]]><![CDATA[kubernetes]]><![CDATA[operator]]><![CDATA[oracle]]><![CDATA[ORDS]]><![CDATA[REST]]><![CDATA[sql-developer]]>http://redstack.wordpress.com/?p=3962<![CDATA[Hi everyone! Today I want to show you how easy it is to get an instance of Oracle up and running in Kubernetes, with Oracle REST Data Services and Database Actions using the Oracle Database Operator for Kubernetes Let’s assume … Continue reading →]]><![CDATA[
That will start up pretty quickly too, and you can check with this command:
kubectl -n oracle-database-operator-system get pods
Let’s create a Single Instance Database. The Oracle Database Operator will let you create other types of databases too, including sharded and multitenant databases, and to manage cloud database instances like Autonomous Database and Database Cloud Service. But today, I’m going to stick with a simple single instance.
Here’s the Kubernetes YAML file to describe the database we want, I called this sidb.yaml:
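The field names below follow the operator’s published samples, so treat this as a sketch and compare it against the samples that ship with your operator version. The name sidb-sample is what the database service will be called later on:
apiVersion: database.oracle.com/v1alpha1
kind: SingleInstanceDatabase
metadata:
  name: sidb-sample
spec:
  sid: ORCL1
  pdbName: orclpdb1
  edition: enterprise
  adminPassword:
    secretName: db-admin-secret
    secretKey: oracle_pwd
  image:
    pullFrom: container-registry.oracle.com/database/enterprise:latest
    pullSecrets: oracle-container-registry-secret
  persistence:
    size: 100Gi
    storageClass: "oci-bv"
    accessMode: "ReadWriteOnce"
  replicas: 1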
If you have not before, head over to Oracle Container Registry and go to the Database group, and accept the license agreement for the Enterprise option. You’ll also want to create a Kubernetes secret with your credentials so it can pull the image:
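Creating the pull secret is a standard kubectl command; the secret name just needs to match whatever you reference in the YAML (I used oracle-container-registry-secret above):
kubectl create secret docker-registry oracle-container-registry-secret \
  --docker-server=container-registry.oracle.com \
  --docker-username='<your-oracle-sso-username>' \
  --docker-password='<your-oracle-sso-password>'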
You will want to change the storageClass to match your cluster. I am using Oracle Container Engine for Kubernetes in this example, so I used the “oci-bv” storage class. If you are using a different flavor of Kubernetes you should check what storage classes are available and use one of them.
This YAML describes a database with the SID ORCL1 and a PDB called orclpdb1. It will get the password for sys, pdbadmin, etc., from a Kubernetes secret – so let’s create that:
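The operator samples use oracle_pwd as the key name, so I’m assuming that here; the value is the Welcome12345 password we will log in with later:
kubectl create secret generic db-admin-secret --from-literal=oracle_pwd=Welcome12345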
Now we can create the database by applying that YAML file to our cluster:
kubectl apply -f sidb.yaml
It will take a few minutes to start up fully – it has to pull the image (which took 3m30s on my cluster, for the “enterprise” image which is the biggest one), create the database instance the first time (mine took 8m), and apply any patches that are required (just over 1m for me). Subsequent startups will be much faster of course (I stopped it by scaling to zero replicas, then started it again by scaling back to one replica and it reached ready/healthy status in about 90s). For reference, my cluster had two nodes each with one OCPU and 16 GB of RAM. You can check on the progress with this command:
kubectl get singleinstancedatabases -o wide -w
As the database starts up, you will see the connection string and other fields populate in the output.
Now, let’s add Oracle REST Data Services. Here’s a Kubernetes YAML file that describes what we want, I called this ords.yaml:
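Again this is a sketch based on the operator’s samples, so check the exact field names and the current ORDS image tag for your operator version; the databaseRef points at the sidb-sample database we just created:
apiVersion: database.oracle.com/v1alpha1
kind: OracleRestDataService
metadata:
  name: ords-sample
spec:
  databaseRef: "sidb-sample"
  ordsPassword:
    secretName: db-admin-secret
    secretKey: oracle_pwd
  adminPassword:
    secretName: db-admin-secret
    secretKey: oracle_pwd
  image:
    pullFrom: container-registry.oracle.com/database/ords:latest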
You can apply that to your cluster with this command:
kubectl apply -f ords.yaml
And we can check on progress with this command:
kubectl get oraclerestdataservice -w
As it becomes ready, you will see the URLs for the Database API REST endpoint and for Database Actions. Mine took about 2m to reach ready/healthy status.
If your nodes are on a private network, the quickest way to access the REST APIs and Database Actions is to use a port forward. You can get the name of the ORDS pod and start a port forwarding session with commands like this:
kubectl get pods
kubectl port-forward pod/ords-sample-g4wc7 8443
Now you can hit the Database API REST endpoint with curl:
On the login page, enter ORCLPDB1 for the PDB Name and mark as the user. Then on the password page enter Welcome12345, and you are good to go!
While we are at it, let’s also get SQLcl access to the database.
Again, we can use port forwarding to access the database from outside the cluster:
kubectl port-forward svc/sidb-sample 1521 &
And then connect from SQLcl (if you have not checked out SQLcl yet, you should, it’s got cool features like command line completion and history):
sql mark/Welcome12345@//localhost:1521/orclpdb1
SQLcl: Release 22.2 Production on Mon May 01 14:32:57 2023
Copyright (c) 1982, 2023, Oracle. All rights reserved.
Last Successful login time: Mon May 01 2023 14:32:56 -04:00
Connected to:
Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Version 21.3.0.0.0
SQL> select * from dual;
DUMMY
________
X
SQL>
There you go! That was super quick and easy! Enjoy!
]]>https://redstack.dev/2023/05/01/start-up-an-oracle-database-in-kubernetes-with-oracle-rest-data-services-and-database-actions-in-no-time-at-all/feed/0markxnelsonNew 23c version of Kafka-compatible Java APIs for Transactional Event Queues published
https://redstack.dev/2023/05/01/new-23c-version-of-kafka-compatible-java-apis-for-transactional-event-queues-published/
https://redstack.dev/2023/05/01/new-23c-version-of-kafka-compatible-java-apis-for-transactional-event-queues-published/#respond<![CDATA[Mark Nelson]]>Mon, 01 May 2023 15:59:34 +0000<![CDATA[Uncategorized]]><![CDATA[database]]><![CDATA[Java]]><![CDATA[kafka]]><![CDATA[oracle]]><![CDATA[txeventq]]>http://redstack.wordpress.com/?p=3931<![CDATA[We just published the new 23c version of the Kafka-compatible Java APIs for Transactional Event Queues in Maven Central, and I wanted to show you how to use them! If you are not familiar with these APIs – they basically … Continue reading →]]><![CDATA[
We just published the new 23c version of the Kafka-compatible Java APIs for Transactional Event Queues in Maven Central, and I wanted to show you how to use them! If you are not familiar with these APIs – they basically allow you to use the standard Kafka Java API with Transactaional Event Queues acting as the Kafka broker. The only things that you would need to change are the broker address, and you need to use the Oracle versions of KafkaProducer and KafkaConsumer – other than that, your existing Kafka Java code should just work!
Let’s build a Kafka producer and consumer using the updated Kafka-compatible APIs.
Prepare the database
The first thing we want to do is start up the Oracle 23c Free Database. This is very easy to do in a container using a command like this:
docker run --name free23c -d -p 1521:1521 -e ORACLE_PWD=Welcome12345 container-registry.oracle.com/database/free:latest
This will pull the image and start up the database with a listener on port 1521. It will also create a pluggable database (a database container) called “FREEPDB1” and will set the admin passwords to the password you specified on this command.
You can tail the logs to see when the database is ready to use:
docker logs -f free23c
(look for this message...)
#########################
DATABASE IS READY TO USE!
#########################
Also, grab the IP address of the container, we’ll need that to connect to the database:
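Any of the usual ways of finding a container’s IP address will work; I tend to use docker inspect:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' free23c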
To set up the necessary permissions, you’ll need to connect to the database with a client. If you don’t have one already, I’d recommend trying the new SQLcl CLI which you can download here. Start it up and connect to the database like this (note that your IP address and password may be different):
sql sys/Welcome12345@//172.17.0.2:1521/freepdb1 as sysdba
SQLcl: Release 22.2 Production on Tue Apr 11 12:36:24 2023
Copyright (c) 1982, 2023, Oracle. All rights reserved.
Connected to:
Oracle Database 23c Free, Release 23.0.0.0.0 - Developer-Release
Version 23.2.0.0.0
SQL>
Now, run these commands to create a user called “mark” and give it the necessary privileges:
SQL> create user mark identified by Welcome12345;
User MARK created.
SQL> grant resource, connect, unlimited tablespace to mark;
Grant succeeded.
SQL> grant execute on dbms_aq to mark;
Grant succeeded.
SQL> grant execute on dbms_aqadm to mark;
Grant succeeded.
SQL> grant execute on dbms_aqin to mark;
Grant succeeded.
SQL> grant execute on dbms_aqjms_internal to mark;
Grant succeeded.
SQL> grant execute on dbms_teqk to mark;
Grant succeeded.
SQL> grant execute on DBMS_RESOURCE_MANAGER to mark;
Grant succeeded.
SQL> grant select_catalog_role to mark;
Grant succeeded.
SQL> grant select on sys.aq$_queue_shards to mark;
Grant succeeded.
SQL> grant select on user_queue_partition_assignment_table to mark;
Grant succeeded.
SQL> exec dbms_teqk.AQ$_GRANT_PRIV_FOR_REPL('MARK');
PL/SQL procedure successfully completed.
SQL> commit;
Commit complete.
SQL> quit;
begin
-- Creates a topic named TEQ with 5 partitions and 7 days of retention time
dbms_teqk.aq$_create_kafka_topic('TEQ', 5);
-- Creates a Consumer Group CG1 for Topic TEQ
dbms_aqadm.add_subscriber('TEQ', subscriber => sys.aq$_agent('CG1', null, null));
end;
/
You should note that the dbms_teqk package is likely to be renamed in the GA release of Oracle Database 23c, but for the Oracle Database 23c Free – Developer Release you can use it.
Ok, we are ready to start on our Java code!
Create a Java project
Let’s create a Maven POM file (pom.xml) and add the dependencies we need for this application. I’ve also included some profiles to make it easy to run the two main entry points we will create – the producer and the consumer. Here’s the content for the pom.xml. Note that I have excluded the osdt_core and osdt_cert transitive dependencies, since we are not using a wallet or SSL in this example, so we do not need those libraries:
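If you are putting the POM together yourself, the key piece is the okafka dependency with those two exclusions. The version shown is the 23c release that was on Maven Central when I wrote this, so check for the latest:
<dependency>
    <groupId>com.oracle.database.messaging</groupId>
    <artifactId>okafka</artifactId>
    <!-- check Maven Central for the current 23c version -->
    <version>23.2.0.0</version>
    <exclusions>
        <exclusion>
            <groupId>com.oracle.database.security</groupId>
            <artifactId>osdt_core</artifactId>
        </exclusion>
        <exclusion>
            <groupId>com.oracle.database.security</groupId>
            <artifactId>osdt_cert</artifactId>
        </exclusion>
    </exclusions>
</dependency>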
This is a pretty straightforward POM. I just set the project’s coordinates, declared my one dependency, and then created the two profiles so I can run the code easily.
Next, we are going to need a file called ojdbc.properties in the same directory as the POM with this content:
user=mark
password=Welcome12345
The KafkaProducer and KafkaConsumer will use this to connect to the database.
Create the consumer
Ok, now let’s create our consumer. In a directory called src/main/java/com/example, create a new Java file called SimpleConsumerOKafka.java with the following content:
package com.example;
import java.util.Properties;
import java.time.Duration;
import java.util.Arrays;
import org.oracle.okafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.common.header.Header;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
public class SimpleConsumerOKafka {
public static void main(String[] args) {
// set the required properties
Properties props = new Properties();
props.put("bootstrap.servers", "172.17.0.2:1521");
props.put("group.id" , "CG1");
props.put("enable.auto.commit","false");
props.put("max.poll.records", 100);
props.put("key.deserializer",
"org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer",
"org.apache.kafka.common.serialization.StringDeserializer");
props.put("oracle.service.name", "freepdb1");
props.put("oracle.net.tns_admin", ".");
props.put("security.protocol","PLAINTEXT");
// create the consumer
Consumer<String , String> consumer = new KafkaConsumer<String, String>(props);
consumer.subscribe(Arrays.asList("TEQ"));
int expectedMsgCnt = 4000;
int msgCnt = 0;
long startTime = 0;
// consume messages
try {
startTime = System.currentTimeMillis();
while(true) {
try {
ConsumerRecords <String, String> records =
consumer.poll(Duration.ofMillis(10_000));
for (ConsumerRecord<String, String> record : records) {
System.out.printf("partition = %d, offset = %d, key = %s, value = %s\n ",
record.partition(), record.offset(), record.key(), record.value());
for(Header h: record.headers()) {
System.out.println("Header: " + h.toString());
}
}
// commit the records we received
if (records != null && records.count() > 0) {
msgCnt += records.count();
System.out.println("Committing records " + records.count());
try {
consumer.commitSync();
} catch(Exception e) {
System.out.println("Exception in commit " + e.getMessage());
continue;
}
// if we got all the messages we expected, then exit
if (msgCnt >= expectedMsgCnt ) {
System.out.println("Received " + msgCnt + ". Expected " +
expectedMsgCnt +". Exiting Now.");
break;
}
} else {
System.out.println("No records fetched. Retrying...");
Thread.sleep(1000);
}
} catch(Exception e) {
System.out.println("Inner Exception " + e.getMessage());
throw e;
}
}
} catch(Exception e) {
System.out.println("Exception from consumer " + e);
e.printStackTrace();
} finally {
long runDuration = System.currentTimeMillis() - startTime;
System.out.println("Application closing Consumer. Run duration " +
runDuration + " ms");
consumer.close();
}
}
}
Let’s walk through this code together.
The first thing we do is prepare the properties for the KafkaConsumer. This is fairly standard, though notice that the bootstrap.servers property contains the address of your database listener:
Then, we add some Oracle-specific properties – oracle.service.name is the name of the service we are connecting to, in our case this is freepdb1; oracle.net.tns_admin needs to point to the directory where we put our ojdbc.properties file; and security.protocol controls whether we are using SSL, or not, as in this case:
With that done, we can create the KafkaConsumer and subscribe to a topic. Note that we use the Oracle version of KafkaConsumer which is basically just a wrapper that understands those extra Oracle-specific properties:
The rest of the code is standard Kafka code that polls for records, prints out any it finds, commits them, and then loops until it has received the number of records it expected and then exits.
Run the consumer
We can build and run the consumer with this command:
mvn exec:exec -P consumer
It will connect to the database and start polling for records, of course there won’t be any yet because we have not created the producer. It should output a message like this about every ten seconds:
No records fetched. Retrying...
Let’s write that producer!
Create the producer
In a directory called src/main/java/com/example, create a new Java file called SimpleProducerOKafka.java with the following content:
package com.example;
import org.oracle.okafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.header.internals.RecordHeader;
import java.util.Properties;
import java.util.concurrent.Future;
public class SimpleProducerOKafka {
public static void main(String[] args) {
long startTime = 0;
try {
// set the required properties
Properties props = new Properties();
props.put("bootstrap.servers", "172.17.0.2:1521");
props.put("key.serializer",
"org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer",
"org.apache.kafka.common.serialization.StringSerializer");
props.put("batch.size", "5000");
props.put("linger.ms","500");
props.put("oracle.service.name", "freepdb1");
props.put("oracle.net.tns_admin", ".");
props.put("security.protocol","PLAINTEXT");
// create the producer
Producer<String, String> producer = new KafkaProducer<String, String>(props);
Future<RecordMetadata> lastFuture = null;
int msgCnt = 4000;
startTime = System.currentTimeMillis();
// send the messages
for (int i = 0; i < msgCnt; i++) {
RecordHeader rH1 = new RecordHeader("CLIENT_ID", "FIRST_CLIENT".getBytes());
RecordHeader rH2 = new RecordHeader("REPLY_TO", "TOPIC_M5".getBytes());
ProducerRecord<String, String> producerRecord =
new ProducerRecord<String, String>(
"TEQ", String.valueOf(i), "Test message "+ i
);
producerRecord.headers().add(rH1).add(rH2);
lastFuture = producer.send(producerRecord);
}
// wait for the last one to finish
lastFuture.get();
// print summary
long runTime = System.currentTimeMillis() - startTime;
System.out.println("Produced "+ msgCnt +" messages in " + runTime + "ms.");
producer.close();
}
catch(Exception e) {
System.out.println("Caught exception: " + e );
e.printStackTrace();
}
}
}
This code is quite similar to the consumer. We first set up the Kafka properties, including the Oracle-specific ones. Then we create a KafkaProducer, again using the Oracle version which understands those extra properties. After that we just loop and produce the desired number of records.
Make sure your consumer is still running (or restart it) and then build and run the producer with this command:
mvn exec:exec -P producer
When you do this, it will run for a short time and then print a message like this to let you know it is done:
Produced 4000 messages in 1955ms.
Now take a look at the output in the consumer window. You should see quite a lot of output there. Here’s a short snippet from the end:
partition = 0, offset = 23047, key = 3998, value = Test message 3998
Header: RecordHeader(key = CLIENT_ID, value = [70, 73, 82, 83, 84, 95, 67, 76, 73, 69, 78, 84])
Header: RecordHeader(key = REPLY_TO, value = [84, 79, 80, 73, 67, 95, 77, 53])
Committing records 27
Received 4000. Expected 4000. Exiting Now.
Application closing Consumer. Run duration 510201 ms
It prints out a message for each record it finds, including the partition ID, the offset, and the key and value. It then prints out the headers. You will also see commit messages, and at the end it prints out how many records it found and how long it was running for. I left mine running while I got the producer ready to go, so it shows a fairly long duration. But you can run it again and start the producer immediately after it and you will see a much shorter run duration.
Well, there you go! That’s a Kafka producer and consumer using the new updated 23c version of the Kafka-compatible Java API for Transactional Event Queues. Stay tuned for more!
]]>https://redstack.dev/2023/05/01/new-23c-version-of-kafka-compatible-java-apis-for-transactional-event-queues-published/feed/0markxnelsonSpring Boot Starters for Oracle updated
https://redstack.dev/2023/04/24/spring-boot-starters-for-oracle-updated/
https://redstack.dev/2023/04/24/spring-boot-starters-for-oracle-updated/#respond<![CDATA[Mark Nelson]]>Mon, 24 Apr 2023 19:11:00 +0000<![CDATA[Uncategorized]]><![CDATA[oracle]]><![CDATA[Spring]]><![CDATA[springboot]]><![CDATA[starter]]><![CDATA[UCP]]>http://redstack.wordpress.com/?p=3923<![CDATA[Hi everyone. We have just published some updates to the Spring Boot Starters for Oracle Database – we added a starter for UCP (Universal Connection Pool) for Spring 3.0.2. This makes it easy to access the Oracle Database from a … Continue reading →]]><![CDATA[
Hi everyone. We have just published some updates to the Spring Boot Starters for Oracle Database – we added a starter for UCP (Universal Connection Pool) for Spring 3.0.2. This makes it easy to access the Oracle Database from a Spring Boot application – just two steps!
Add a dependency to your Maven POM file (or equivalent)
Here’s the dependency to add:
<dependency>
<groupId>com.oracle.database.spring</groupId>
<artifactId>oracle-spring-boot-starter-ucp</artifactId>
<version>3.0.2</version> <!-- or 2.7.7 for Spring Boot 2.x -->
<type>pom</type>
</dependency>
Add the datasource properties to your Spring Boot application.yaml
Here’s an example, assuming you are also using Spring Data JPA:
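Something along these lines; the connection details are example values, and the oracleucp property names come from the UCP auto-configuration, so check the starter documentation if your version differs:
spring:
  datasource:
    url: jdbc:oracle:thin:@//localhost:1521/freepdb1
    username: myuser
    password: mypassword
    driver-class-name: oracle.jdbc.OracleDriver
    type: oracle.ucp.jdbc.PoolDataSourceImpl
    oracleucp:
      connection-factory-class-name: oracle.jdbc.pool.OracleDataSource
      connection-pool-name: MyConnectionPool
      initial-pool-size: 5
      min-pool-size: 5
      max-pool-size: 10
  jpa:
    hibernate:
      ddl-auto: update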
We are working to add more Spring Boot Starters for Oracle Database to make it even easier to use, and to make sure we cover all the versions you need! Stay tuned for more updates!
]]>https://redstack.dev/2023/04/24/spring-boot-starters-for-oracle-updated/feed/0markxnelsonOracle Backend for Spring Boot (and Parse Platform) introductory video plublished!
https://redstack.dev/2023/04/20/oracle-backend-for-spring-boot-and-parse-platform-introductory-video-plublished/
https://redstack.dev/2023/04/20/oracle-backend-for-spring-boot-and-parse-platform-introductory-video-plublished/#respond<![CDATA[Mark Nelson]]>Thu, 20 Apr 2023 13:03:20 +0000<![CDATA[Uncategorized]]><![CDATA[backend]]><![CDATA[microservices]]><![CDATA[mobile]]><![CDATA[oracle]]><![CDATA[parse]]><![CDATA[springboot]]>http://redstack.wordpress.com/?p=3920<![CDATA[We just published a short YouTube video that introduces the Oracle Backend for Spring Boot (and Parse Platform) which makes it super easy to develop, run and manage Spring Boot microservices and mobile applications leveraging all the power of Oracle’s … Continue reading →]]><![CDATA[
We just published a short YouTube video that introduces the Oracle Backend for Spring Boot (and Parse Platform) which makes it super easy to develop, run and manage Spring Boot microservices and mobile applications leveraging all the power of Oracle’s converged database.
I hope you can check it out!
]]>https://redstack.dev/2023/04/20/oracle-backend-for-spring-boot-and-parse-platform-introductory-video-plublished/feed/0markxnelsonImplementing the Transactional Outbox pattern using Transactional Event Queues and JMS
https://redstack.dev/2023/04/11/implementing-the-transactional-outbox-pattern-using-transactional-event-queues-and-jms/
https://redstack.dev/2023/04/11/implementing-the-transactional-outbox-pattern-using-transactional-event-queues-and-jms/#respond<![CDATA[Mark Nelson]]>Tue, 11 Apr 2023 20:15:45 +0000<![CDATA[Uncategorized]]><![CDATA[23c]]><![CDATA[23c free]]><![CDATA[JMS]]><![CDATA[TEQ]]><![CDATA[transactional outbox]]><![CDATA[txeventq]]>http://redstack.wordpress.com/?p=3873<![CDATA[Hi, in this post I want to provide an example of how to implement the Transactional Outbox pattern using Transactional Event Queues and JMS with the new Oracle Database 23c Free – Developer Release I mentioned in my last post. … Continue reading →]]><![CDATA[
In the Transactional Outbox pattern, we have a microservice that needs to perform a database operation (like an insert) and send a message, and either both or neither of these need to happen.
Unlike other messaging providers, Transactional Event Queues is built-in to the Oracle Database and has the unique advantage of being able to expose the underlying database transaction to your application. This allows us perform database and messaging operations in the same transaction – which is exactly what we need to implement this pattern.
Prepare the database
The first thing we want to do is start up the Oracle 23c Free Database. This is very easy to do in a container using a command like this:
docker run --name free23c -d -p 1521:1521 -e ORACLE_PWD=Welcome12345 container-registry.oracle.com/database/free:latest
This will pull the image and start up the database with a listener on port 1521. It will also create a pluggable database (a database container) called “FREEPDB1” and will set the admin passwords to the password you specified on this command.
You can tail the logs to see when the database is ready to use:
docker logs -f free23c
(look for this message...)
#########################
DATABASE IS READY TO USE!
#########################
Also, grab the IP address of the container, we’ll need that to connect to the database:
To set up the necessary permissions, you’ll need to connect to the database with a client. If you don’t have one already, I’d recommend trying the new SQLcl CLI which you can download here. Start it up and connect to the database like this (note that your IP address and password may be different):
sql sys/Welcome12345@//172.17.0.2:1521/freepdb1 as sysdba
SQLcl: Release 22.2 Production on Tue Apr 11 12:36:24 2023
Copyright (c) 1982, 2023, Oracle. All rights reserved.
Connected to:
Oracle Database 23c Free, Release 23.0.0.0.0 - Developer-Release
Version 23.2.0.0.0
SQL>
Now, run these commands to create a user called “mark” and give it the necessary privileges:
SQL> create user mark identified by Welcome12345;
User MARK created.
SQL> grant resource , connect, unlimited tablespace to mark;
Grant succeeded.
SQL> grant execute on dbms_aq to mark;
Grant succeeded.
SQL> grant execute on dbms_aqadm to mark;
Grant succeeded.
SQL> grant execute on dbms_aqin to mark;
Grant succeeded.
SQL> grant execute on dbms_aqjms_internal to mark;
Grant succeeded.
SQL> grant execute on dbms_teqk to mark;
Grant succeeded.
SQL> grant execute on DBMS_RESOURCE_MANAGER to mark;
Grant succeeded.
SQL> grant select_catalog_role to mark;
Grant succeeded.
SQL> grant select on sys.aq$_queue_shards to mark;
Grant succeeded.
SQL> grant select on user_queue_partition_assignment_table to mark;
Grant succeeded.
SQL> exec dbms_teqk.AQ$_GRANT_PRIV_FOR_REPL('MARK');
PL/SQL procedure successfully completed.
SQL> commit;
Commit complete.
SQL> quit;
Ok, we are ready to start on our Java code!
Create the Java project
If you have read my posts before, you’ll know I like to use Maven for my Java projects. Let’s create a Maven POM file (pom.xml) and add the dependencies we need for this application. I’ve also included some profiles to make it easy to run the three main entry points we will create – one to create a queue, one to consume messages, and finally the transactional outbox implementation. Here’s the content for the pom.xml:
I won’t go into a heap of detail on this or the first two Java classes, since they are fairly standard and I have talked about very similar things before in older posts, including this one for example. I will go into detail on the transactional outbox implementation though, don’t worry!
Create a Java class to create the queue
We are going to need a queue to put messages on, so let me show you how to do that in Java. Transactional Event Queues support various types of queues and payloads. This example shows how to create a queue that uses the JMS format. Create a file called src/main/com/example/CreateTxEventQ.java with this content:
As you read through this, you’ll see I’ve just hardcoded the username, password and URL for convenience in this file (and the others in this post). Of course we’d never do that in real life, would we? You should also notice that we get a connection, create the queue table first (setting the consumer type to multiple, i.e. pub/sub – so a JMS Topic – and the format to JMS), then create the queue itself, and finally start it up. Easy, right?
You can run this and create the queue with this command:
mvn exec:exec -Pcreateq
If you want to see the queue in the database, you can log in using that mark user you created and run a query:
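Any query against the queue views will do, for example:
select name, queue_table, queue_type from user_queues;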
While you’re there, let’s also create a table so we have somewhere to perform the database insert operation:
create table customer ( name varchar2(256), email varchar2(256) );
Create the consumer
Let’s create the consumer next. This will be a new Java file in the same directory called Consume.java. Here’s the content:
package com.example;
import java.sql.SQLException;
import javax.jms.JMSException;
import javax.jms.Session;
import javax.jms.Topic;
import javax.jms.TopicConnection;
import javax.jms.TopicConnectionFactory;
import javax.jms.TopicSession;
import oracle.AQ.AQException;
import oracle.jms.AQjmsFactory;
import oracle.jms.AQjmsSession;
import oracle.jms.AQjmsTextMessage;
import oracle.jms.AQjmsTopicSubscriber;
import oracle.ucp.jdbc.PoolDataSource;
import oracle.ucp.jdbc.PoolDataSourceFactory;
public class Consume {
private static String username = "mark";
private static String password = "Welcome12345";
private static String url = "jdbc:oracle:thin:@//172.17.0.2:1521/freepdb1";
private static String topicName = "my_txeventq";
public static void main(String[] args) throws AQException, SQLException, JMSException {
// create a topic session
PoolDataSource ds = PoolDataSourceFactory.getPoolDataSource();
ds.setConnectionFactoryClassName("oracle.jdbc.pool.OracleDataSource");
ds.setURL(url);
ds.setUser(username);
ds.setPassword(password);
// create a JMS topic connection and session
TopicConnectionFactory tcf = AQjmsFactory.getTopicConnectionFactory(ds);
TopicConnection conn = tcf.createTopicConnection();
conn.start();
TopicSession session =
(AQjmsSession) conn.createSession(true, Session.AUTO_ACKNOWLEDGE);
// create a subscriber on the topic
Topic topic = ((AQjmsSession) session).getTopic(username, topicName);
AQjmsTopicSubscriber subscriber =
(AQjmsTopicSubscriber) session.createDurableSubscriber(topic, "my_subscriber");
System.out.println("Waiting for messages...");
// wait forever for messages to arrive and print them out
while (true) {
// the 1_000 is a one second timeout
AQjmsTextMessage message = (AQjmsTextMessage) subscriber.receive(1_000);
if (message != null) {
if (message.getText() != null) {
System.out.println(message.getText());
} else {
System.out.println();
}
}
session.commit();
}
}
}
This one is a fairly standard JMS consumer. It is going to create a subscription to that topic we just created, and wait for messages to arrive, and then just print the content on the screen. Nice and simple. You can run this with this command:
mvn exec:exec -Pconsume
Leave that running so that you see messages as they are produced. Later, when you run the transactional outbox producer, run it in a different window so that you can see what happens in the consumer.
Implement the Transactional Outbox pattern
Yay! The fun part! Here’s the code for this class, which will go into a new Java file in the same directory called Publish.java. I’ll walk through this code step by step.
package com.example;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;
import javax.jms.JMSException;
import javax.jms.Session;
import javax.jms.Topic;
import javax.jms.TopicConnection;
import javax.jms.TopicConnectionFactory;
import javax.jms.TopicSession;
import oracle.AQ.AQException;
import oracle.jms.AQjmsAgent;
import oracle.jms.AQjmsFactory;
import oracle.jms.AQjmsSession;
import oracle.jms.AQjmsTextMessage;
import oracle.jms.AQjmsTopicPublisher;
import oracle.ucp.jdbc.PoolDataSource;
import oracle.ucp.jdbc.PoolDataSourceFactory;
public class Publish {
private static String username = "mark";
private static String password = "Welcome12345";
private static String url = "jdbc:oracle:thin:@//172.17.0.2:1521/freepdb1";
private static String topicName = "my_txeventq";
public static void main(String[] args) throws JMSException, SQLException {
AQjmsTopicPublisher publisher = null;
TopicSession session = null;
TopicConnection tconn = null;
Connection conn = null;
if (args.length != 3) {
System.err.println("""
You must provide 3 arguments - name, email and failure mode
failure mode:
0 do not fail
1 fail before insert and publish
2 fail after insert, before publish
3 fail after insert and publish
""");
System.exit(1);
}
String name = args[0];
String email = args[1];
int failMode = Integer.parseInt(args[2]);
try {
// create a topic session
PoolDataSource ds = PoolDataSourceFactory.getPoolDataSource();
ds.setConnectionFactoryClassName("oracle.jdbc.pool.OracleDataSource");
ds.setURL(url);
ds.setUser(username);
ds.setPassword(password);
// create a JMS topic connection and session
TopicConnectionFactory tcf = AQjmsFactory.getTopicConnectionFactory(ds);
tconn = tcf.createTopicConnection();
tconn.start();
// open a Transactional session
session = (AQjmsSession) tconn.createSession(true, Session.AUTO_ACKNOWLEDGE);
// also get the JDBC connection
conn = ((AQjmsSession) session).getDBConnection();
conn.setAutoCommit(false);
// if failMode = 1, fail here
if (failMode == 1) throw new Exception();
// first, perform the database operation
PreparedStatement stmt = conn.prepareStatement("insert into customer (name, email) values (?, ?)");
stmt.setString(1, name);
stmt.setString(2, email);
stmt.executeUpdate();
System.out.println("row inserted");
// if failMode = 2, fail here
if (failMode == 2) throw new Exception();
// second, publish the message
Topic topic = ((AQjmsSession) session).getTopic(username, topicName);
publisher = (AQjmsTopicPublisher) session.createPublisher(topic);
AQjmsTextMessage message = (AQjmsTextMessage) session.createTextMessage("new customer with name=" + name + " and email=" + email);
publisher.publish(message, new AQjmsAgent[] { new AQjmsAgent("my_subscriber", null) });
System.out.println("message sent");
// if failMode = 3, fail here
if (failMode == 3) throw new Exception();
// we didn't fail - so commit the transaction
if (failMode == 0) session.commit();
} catch (Exception e) {
System.err.println("rolling back");
if (conn != null) conn.rollback();
} finally {
// clean up
if (publisher != null) publisher.close();
if (session != null) session.close();
if (tconn != null) tconn.close();
}
}
}
Ok, so the overall structure of the code is as follows:
First, we are going to start a transaction. Then we will perform two operations – insert a record into the customer table, and send a message on a topic. If everything works as expected, we will commit the transaction. Of course, if there is a failure at any point, we will roll back instead. Notice the arrows are labeled with numbers – in the code I have included failure points that correspond to each of these arrows.
At the start of the main method, we are going to check we have the expected arguments — the name and email, and the point at which to fail, i.e., which of those arrows to simulate a failure at. A “0” indicates that no failure should be simulated. So if we run the code with “mark [email protected] 2” as the input, we expect it to fail on the “2” arrow – after it inserted the row in the table and before it sent the message on the topic.
Next we get both a JMS Connection and a JDBC Connection. This is important because it allows us to have a single transaction. Note the following lines:
// open a Transactional session
session = (AQjmsSession) tconn.createSession(true, Session.AUTO_ACKNOWLEDGE);
// also get the JDBC connection
conn = ((AQjmsSession) session).getDBConnection();
conn.setAutoCommit(false);
We explicitly set the “auto commit” to false on the JDBC connection – we want to control exactly if and when work is committed; we do not want any automatic commits to occur. And on the JMS session we set the “transacted” parameter to true. That’s the first parameter in the createSession() call. This tells it to use the same database transaction.
Next, you will notice that we simulate a failure if the failure point was “1”:
if (failMode == 1) throw new Exception();
If an exception is thrown at this point (or any point), we’d expect to see no new rows in the database and no messages received by the consumer. We can check the table with this query:
select * from customer;
And you will see output like this in the consumer window every time a message is produced, so if you do not see that output – no messages:
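Based on the message text built in Publish.java, each line printed by the consumer will look something like this (with whatever name and email you passed in):
new customer with name=<name> and email=<email>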
And next, we have the code to publish a message on the topic:
AQjmsTextMessage message = (AQjmsTextMessage) session.createTextMessage("new customer with name=" + name + " and email=" + email);
publisher.publish(message, new AQjmsAgent[] { new AQjmsAgent("my_subscriber", null) });
System.out.println("message sent");
Then you’ll see failure point “3” and then finally the commit!
Next, notice that the catch block contains a rollback on the database connection. You don’t have to rollback the JMS session as well – since they are in the same transaction, this one rollback call is enough to rollback all of the operations.
Run the Transactional Outbox code
Now we’re ready to run the code! First, notice in the POM file we created a profile called “publish” which contains the following configuration:
The last three arguments are the name, email and the failure point. If you go ahead and run it as is (with failure point 0, meaning no failure) then it should actually get all the way through to the commit. You should see output in the consumer window to let you know the message was produced, and you can check the table in the database to see the new record in there. Run the code like this:
mvn exec:exec -Pproduce
Of course, you’ll see a record in the table and the message.
If you now edit the POM file and change that last argument from 0 to any of the other options and run it again, you’ll notice that it rolls back and you do not get a new record in the table or a message produced on the topic.
How do I know it really worked?
If you’d like to experiment and convince yourself it really is working, try something like commenting out failure point 2 like this:
// if (failMode == 2) throw new Exception();
When you run the code again, you will now see that there is a row in the database that was not rolled back (because the failure never occurred and the exception was never thrown) but the message was never sent (because the commit was never run due to failMode being 2, not 0).
If you tweak the failure points you can easily convince yourself that it is in fact working just as expected.
So there you go, that’s the Transactional Outbox pattern implemented using Transactional Event Queues with Oracle Database 23c Free – that was pretty easy, right? Hope you enjoyed it, and see you soon!
]]>https://redstack.dev/2023/04/11/implementing-the-transactional-outbox-pattern-using-transactional-event-queues-and-jms/feed/0markxnelsonBig news today – Oracle Database 23c Free—Developer Release just released!
https://redstack.dev/2023/04/03/big-news-today-oracle-database-23c-free-developer-release-just-released/
https://redstack.dev/2023/04/03/big-news-today-oracle-database-23c-free-developer-release-just-released/#comments<![CDATA[Mark Nelson]]>Mon, 03 Apr 2023 23:11:12 +0000<![CDATA[Uncategorized]]><![CDATA[23c]]><![CDATA[database]]><![CDATA[duality]]><![CDATA[free]]><![CDATA[javascript stored procedures]]><![CDATA[oracle]]>http://redstack.wordpress.com/?p=3867<![CDATA[Hi everyone! Big news today, just announced at Oracle CloudWorld in Singapore! The new Oracle Database 23c Free – Developer Release is now available. Oracle Database 23c Free – Developer Release is the first release of the next-generation Oracle Database, … Continue reading →]]><![CDATA[
Hi everyone! Big news today, just announced at Oracle CloudWorld in Singapore!
The new Oracle Database 23c Free – Developer Release is now available.
Oracle Database 23c Free – Developer Release is the first release of the next-generation Oracle Database, allowing developers a head-start on building applications with innovative 23c features that simplify development of modern data-driven apps. The entire feature set of Oracle Database 23c is planned to be generally available within the next 12 months.
It has heaps of new developer-focused features and it’s completely free! And easy to download and use!
My two favorite features are:
the new JSON Relational Duality Views which allow you to create a JSON document representation from a number of existing tables, and they are read/write! So you can use JSON in your applications and have the underlying data stored in relational tables. Of course you can store it in JSON too if you want to!
JavaScript Stored Procedures, or as I like to think of them – in-database microservices which can scale to zero, with fast startup and scaling, and resource management to prevent noisy neighbors!
I look forward to writing posts about those, and some other exciting new features really soon.
]]>https://redstack.dev/2023/04/03/big-news-today-oracle-database-23c-free-developer-release-just-released/feed/1markxnelsonSession catalog for DevLive Level Up 2023 released!
https://redstack.dev/2023/03/01/session-catalog-for-devlive-level-up-2023-released/
https://redstack.dev/2023/03/01/session-catalog-for-devlive-level-up-2023-released/#respond<![CDATA[Mark Nelson]]>Wed, 01 Mar 2023 19:53:48 +0000<![CDATA[Uncategorized]]><![CDATA[DevLive]]><![CDATA[flutter]]><![CDATA[level up]]><![CDATA[microservices]]><![CDATA[parse]]><![CDATA[Spring]]><![CDATA[springboot]]>http://redstack.wordpress.com/?p=3863<![CDATA[Hi again! In this earlier post, I mentioned that I am speaking at Level Up 2023. The session catalog has just been released on the event website. You can find my sessions in this stream: Data strategies for developers – … Continue reading →]]><![CDATA[
Hi again! In this earlier post, I mentioned that I am speaking at Level Up 2023. The session catalog has just been released on the event website. You can find my sessions in this stream:
]]>https://redstack.dev/2023/03/01/session-catalog-for-devlive-level-up-2023-released/feed/0markxnelsonI’m speaking at Level Up 2023
https://redstack.dev/2023/02/17/im-speaking-at-level-up-2023/
https://redstack.dev/2023/02/17/im-speaking-at-level-up-2023/#comments<![CDATA[Mark Nelson]]>Fri, 17 Feb 2023 20:03:00 +0000<![CDATA[Uncategorized]]><![CDATA[conductor]]><![CDATA[eureka]]><![CDATA[flutter]]><![CDATA[level up]]><![CDATA[oracle]]><![CDATA[parse]]><![CDATA[redwood shores]]><![CDATA[Spring]]><![CDATA[springboot]]>http://redstack.wordpress.com/?p=3858<![CDATA[Hi! I am going to be speaking at the Level Up 2023 event at Oracle Redwood Shores in March. I will talking about our new Developer Previews for both Oracle Backend for Spring Boot and Oracle Backend for Parse Platform, … Continue reading →]]><![CDATA[
Hi! I am going to be speaking at the Level Up 2023 event at Oracle Redwood Shores in March. I will be talking about our new Developer Previews for both Oracle Backend for Spring Boot and Oracle Backend for Parse Platform, and running a hands-on lab where we will use those to build a “Cloud Banking” application in Spring Boot, complete with a web and mobile front-end user interface. In the lab we’ll explore topics like service discovery, external configuration, workflow, API management, fault tolerance and observability.
If you’re in the Bay Area and you’d like to attend in person – or if you’d like to attend from anywhere digitally – you can find more information and register here:
]]>https://redstack.dev/2023/02/17/im-speaking-at-level-up-2023/feed/1markxnelsonA first Spring Boot microservice with Oracle
https://redstack.dev/2023/02/03/a-first-spring-boot-microservice-with-oracle/
https://redstack.dev/2023/02/03/a-first-spring-boot-microservice-with-oracle/#respond<![CDATA[Mark Nelson]]>Fri, 03 Feb 2023 18:40:08 +0000<![CDATA[Uncategorized]]><![CDATA[jpa]]><![CDATA[oracle]]><![CDATA[Spring]]><![CDATA[springboot]]>http://redstack.wordpress.com/?p=3773<![CDATA[In this post, I want to walk through creating a first simple Spring Boot microservice using Oracle. If you want to follow along, see this earlier post about setting up a development environment. I want to create a “customer” microservice … Continue reading →]]><![CDATA[
I want to create a “customer” microservice that I can use to create/register customers, and to get customer details. I want the customer information to be stored in my Oracle database. I am going to create a dedicated schema for this microservice, where it will keep its data. I could create a separate pluggable database, but that seems a little excessive given the simplicity of this service.
So my “customer” data will have the following attributes:
Customer ID
First name
Surname
Email address
My service will have endpoints to:
Create a customer
List all customers
Get a customer by ID
I am going to use Spring Boot 3.0.0 with Java 17 and Maven. Spring Boot 3.0.0 was just released (when I started writing this post) and has support for GraalVM native images and better observability and tracing.
Create the project
Let’s start by creating a project. If you set up your development environment like mine, with Visual Studio Code and the Spring Extension Pack, you can type Ctrl+Shift+P to bring up the actions and type in “Spring Init” to find the “Spring Initializr: Create a Maven project” action, then hit enter.
It will now ask you a series of questions. Here’s how I set up my project:
Spring Boot Version = 3.0.0
Language = Java
Group ID = com.redstack
Artifact ID = customer
Packaging = JAR
Java version = 17
Dependencies:
Spring Web
Spring Data JPA
Oracle Driver
After that, it will ask you which directory to create the project in. Once you answer all the questions, it will create the project for you and then give you the option to open it (in a new Visual Studio Code window.)
Note: If you prefer, you can go to the Spring Initializr website instead and answer the same questions there instead. It will then generate the project and give you a zip file to download. If you choose this option, just unzip the file and open it in Visual Studio Code.
Whichever approach you take, you should end up with a project open in Code that looks a lot like this:
I like to trim out a few things that we don’t really need. I tend to delete the “.mvn” directory, the “mvnw” and “mvnw.cmd” files and the “HELP.md” file. Now is also a great time to create a git repository for this code. I like to add/commit all of these remaining files and keep that as my starting point.
Explore the generated code
Here’s the Maven POM (pom.xml) that was generated:
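A minimal sketch of what Spring Initializr generates for these choices looks like this – the ojdbc artifact name and the Lombok dependency (discussed below) are assumptions, so your generated file may differ slightly:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>3.0.0</version>
        <relativePath/>
    </parent>
    <groupId>com.redstack</groupId>
    <artifactId>customer</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <properties>
        <java.version>17</java.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-jpa</artifactId>
        </dependency>
        <dependency>
            <groupId>com.oracle.database.jdbc</groupId>
            <artifactId>ojdbc11</artifactId>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <optional>true</optional>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>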
There’s a few things to note here. The parent is the standard spring-boot-starter-parent and this will bring in a bunch of useful defaults for us. The dependencies list contains the items we chose in the Spring Initializr (as expected) and finally, note the build section has the spring-boot-maven-plugin included. This will let us build and run the Spring Boot application easily from maven (with “mvn spring-boot:run“).
Lombok offers various annotations aimed at replacing Java code that is well known for being boilerplate, repetitive, or tedious to write. We’ll use it to avoid writing getters, setters, constructors and builders.
And here is the main CustomerApplication Java class file:
package com.redstack.customer;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
@SpringBootApplication
public class CustomerApplication {
public static void main(String[] args) {
SpringApplication.run(CustomerApplication.class, args);
}
}
Nothing much to see here. Notice it has the @SpringBootApplication annotation.
Define the Customer Entity
Let’s go ahead and define our data model now. Since we are using JPA, we define our data model using a POJO. Create a Customer.java file in src/main/java/com/redstack/customer with this content:
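Here is a sketch of that entity, matching the description that follows (the sequence name is an assumption):
package com.redstack.customer;

import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.GenerationType;
import jakarta.persistence.Id;
import jakarta.persistence.SequenceGenerator;
import lombok.AllArgsConstructor;
import lombok.Builder;
import lombok.Data;
import lombok.NoArgsConstructor;

@Entity
@Data
@Builder
@AllArgsConstructor
@NoArgsConstructor
public class Customer {

    @Id
    @SequenceGenerator(name = "customer_id_sequence", sequenceName = "customer_id_sequence")
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "customer_id_sequence")
    private Integer id;

    private String firstName;
    private String lastName;
    private String email;
}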
Starting from the bottom, we see the definition of the four fields that we wanted for our Customer entity – ID, first and last names, and email address.
The id field has some annotations on it. First it has @Id which identifies it as the key. Then we have a @SequenceGenerator annotation, which tells JPA that we want to create a “sequence” in the database and gives it a name. A sequence is a database object from which multiple users may generate unique integers. The last annotation, @GeneratedValue tells JPA that this field should be populated from that sequence.
The class also has some annotations on it. It has the JPA @Entity annotation which tells JPA that this is an entity that we want to store in the database. The other annotations are Lombok annotations to save us writing a bunch of boilerplate code. @Data generates getters for all fields, a useful toString method, and hashCode and equals implementations that check all non-transient fields. It will also generate setters for all non-final fields, as well as a constructor. @Builder generates some nice APIs to create instances of our object – we’ll see how we use it later on. And @AllArgsConstructor and @NoArgsConstructor generate pretty much what their names suggest they do.
Set up the Spring Boot Application Properties
Ok, next let’s set up the JPA configuration in the Spring Boot Application Properties. You will find a file called application.properties in src/main/resources. This file can be in either the “properties” format, or in YAML. I personally prefer to use YAML, so I renamed that file to application.yaml and here is the content:
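A sketch of that application.yaml, using the local database from the development environment post (adjust the URL, username and password for your own environment):
server:
  port: 8080

spring:
  application:
    name: customer
  datasource:
    url: jdbc:oracle:thin:@//172.17.0.2:1521/pdb1
    username: customer
    password: Welcome123
    driver-class-name: oracle.jdbc.OracleDriver
  jpa:
    hibernate:
      ddl-auto: update
    properties:
      hibernate:
        dialect: org.hibernate.dialect.OracleDialect
        format_sql: true
    show-sql: true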
Let’s look at what we have here. First we set the port to 8080, and the application’s name to “customer”. If you prefer to use the properties format, these first two settings would look like this:
server.port=8080
spring.application.name=customer
After that we set up the data source. You can provide the JDBC URL for your Oracle Database, and the username, password and JDBC driver class, as shown. Note that the user will need to actually exist. You can create the user in the database by running these statements as an admin user:
create user customer identified by Welcome123;
grant connect, resource to customer;
alter user customer quota unlimited on users;
commit;
The final section of config we see here is the JPA configuration where we need to declare which “dialect” we are using – this identifies what kind of SQL should be generated, in our case Oracle. The format-sql and show-sql settings are just there to make the SQL statements we see in logs easier for us to read.
The ddl-auto setting is interesting. Here’s a good article that explains the possible values and what they do. We’ve used update in this example, which “instructs Hibernate to update the database schema by comparing the existing schema with the entity mappings and generate the appropriate schema migration scripts.” That’s a reasonable choice for this scenario, but you should be aware that there are probably better choices in some cases. For example, if you are actively developing the entity and making changes to it, create-drop might be better for you. And if the database objects already exist and you just want to use them, then none might be the best choice – we’ll talk more about this in a future post!
Create the JPA Repository Class
Next, let’s create the JPA Repository class which we can use to save, retrieve and delete entities in/from the database. Create a file called CustomerRepository.java in src/main/java/com/redstack/customer with this content:
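A minimal sketch of the repository – Spring Data JPA generates the implementation for us, so an empty interface is enough for now:
package com.redstack.customer;

import org.springframework.data.jpa.repository.JpaRepository;

public interface CustomerRepository extends JpaRepository<Customer, Integer> {
}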
Ok, that takes care of our JPA work. Now, let’s get started on our services.
Create the Customer Service
Let’s start with a service to register (create) a new customer. We can start by defining the input data that we expect. Let’s create a CustomerRegistrationRequest.java in the same directory with this content:
package com.redstack.customer;
public record CustomerRegistrationRequest(
String firstName,
String lastName,
String email) {
}
Notice that we did not include the ID, because we are going to get that from the database sequence. So we just need the client/caller to give us the remaining three fields.
Next, we can create our controller. Create a new file called CustomerController.java in the same directory with this content:
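A sketch of the controller as described below – the @ResponseStatus(HttpStatus.CREATED) annotation is an assumption based on the HTTP 201 we see when we test it later:
package com.redstack.customer;

import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;
import org.springframework.web.bind.annotation.ResponseStatus;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("api/v1/customers")
public record CustomerController(CustomerService customerService) {

    // handle HTTP POST requests to create (register) a new customer
    @PostMapping
    @ResponseBody
    @ResponseStatus(HttpStatus.CREATED)
    public void registerCustomer(@RequestBody CustomerRegistrationRequest customerRegistrationRequest) {
        customerService.registerCustomer(customerRegistrationRequest);
    }
}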
So here we used a Java record to define the controller, and we ask Spring to inject the CustomerService for us. Obviously, we have not created that yet – we’ll get to that in a minute! The record has two annotations – @RestController tells Spring to expose a REST API for this record, and @RequestMapping lets us set up the URL path for this controller. Since we set the port to 8080 earlier, and assuming we just run this on our development machine for now, this REST API will have a URL of http://localhost:8080/api/v1/customers.
Next we can define the handlers. Here we have just the first one, to handle HTTP POST requests. We will add others later. Our registerCustomer method will be exposed as the handler for POST requests, because we gave it the @PostMapping annotation, and it will be able to return an HTTP response with a status code and body because we gave it the @ResponseBody annotation. This method accepts the CustomerRegistrationRequest that we defined earlier. Notice that we add the @RequestBody annotation to that method argument. This tells Spring that the data will be provided by the caller as JSON in the HTTP Request Body (as opposed to being in a query parameter or header, etc.). And this handler simply calls the registerCustomer method in the service and passes through the data.
So, it’s time to write the service! Create a new file called CustomerService.java in the same directory with this content:
package com.redstack.customer;
import org.springframework.stereotype.Service;
@Service
public record CustomerService(CustomerRepository repository) {
public void registerCustomer(CustomerRegistrationRequest req) {
Customer customer = Customer.builder()
.firstName(req.firstName())
.lastName(req.lastName())
.email(req.email())
.build();
repository.saveAndFlush(customer);
}
}
Again, we are using a Java record for the service. Records are immutable data classes that require only the type and name of fields. The equals, hashCode, and toString methods, as well as the private, final fields and public constructor, are generated by the Java compiler. You can also include static variables and methods in records. I’m using them here to save a bunch of boilerplate code that I do not want to write.
We put the @Service annotation on the record to tell Spring that this is a service. In the record arguments, we have Spring inject an instance of our CustomerRepository which we will need to talk to the database.
For now, we just need one method in our service, registerCustomer(). We’ll add more later. This method also accepts the CustomerRegistrationRequest and the first thing we do with it is create a new Customer entity object. Notice that we are using the builder that we auto-generated with Lombok – we never wrote any code to create this builder! Yay! Then, all we need to do is use our JPA repository’s saveAndFlush() method to save that customer in the database. saveAndFlush will do an INSERT and then a COMMIT in the database.
Time to test the application!
Let’s start up our service and test it! Before we start, you might want to connect to your database and satisfy yourself that there is no CUSTOMER table there:
sql customer/Welcome123@//172.17.0.2:1521/pdb1
SQL> select table_name from user_tables;
no rows selected
To run the service, run this Maven command:
mvn spring-boot:run
This will compile the code and then run the service. You will see a bunch of log messages appear. In around the middle you should see something like this:
2023-02-03T11:15:37.827-05:00 INFO 8488 --- [ main] SQL dialect : HHH000400: Using dialect: org.hibernate.dialect.OracleDialect
Hibernate: create global temporary table HTE_customer(id number(10,0), email varchar2(255 char), first_name varchar2(255 char), last_name varchar2(255 char), rn_ number(10,0) not null, primary key (rn_)) on commit delete rows
Hibernate: create table customer (id number(10,0) not null, email varchar2(255 char), first_name varchar2(255 char), last_name varchar2(255 char), primary key (id))
There’s the SQL that it ran to create the CUSTOMER table for us! If you’d like to, you can check in the database with this statement:
SQL> describe customer;
Name Null? Type
_____________ ___________ _____________________
ID NOT NULL NUMBER(10)
EMAIL VARCHAR2(255 CHAR)
FIRST_NAME VARCHAR2(255 CHAR)
LAST_NAME VARCHAR2(255 CHAR)
You can also take a look at the sequence if you would like to:
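For example (the sequence name will be whatever was specified in the @SequenceGenerator annotation):
SQL> select sequence_name, last_number from user_sequences;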
Now, let’s invoke the service to test it! We can invoke the service using cURL, we need to do a POST, set the Content-Type header and provide the data in JSON format:
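Something like this will do it – the name and email values here are just placeholders:
curl -i -X POST http://localhost:8080/api/v1/customers \
  -H 'Content-Type: application/json' \
  -d '{"firstName": "Mark", "lastName": "Nelson", "email": "mark@example.com"}'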
The “-i” tells cURL to print out the response. You can see that we got an HTTP 201 (created), i.e., success!
Now we see the new record in the database, as expected:
SQL> select * from customer ;
ID EMAIL FIRST_NAME LAST_NAME
_____ ________________ _____________ ____________
1 [email protected] Mark Nelson
Great, that is working the way we wanted, so we can create customers and have them stored in the database. Now let’s add some endpoints to query customers from the database.
Add a “get all customers” endpoint
The first endpoint we want to add will allow us to get a list of all customers. To do this, let’s add this new method to our controller:
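A sketch of that handler, plus the extra imports it needs:
// add these imports
import java.util.List;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;

// ...

// handle HTTP GET requests to list all customers
@GetMapping(produces = MediaType.APPLICATION_JSON_VALUE)
@ResponseStatus(HttpStatus.OK)
public List<Customer> getAllCustomers() {
    return customerService.getAllCustomers();
}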
Here we have a getAllCustomers() method that simply calls the corresponding method in the service (we’ll write that in a moment) and returns the results. Of course, we have some annotations too. The @GetMapping tells Spring Boot that this method will be exposed as an HTTP GET method handler. The produces defines the output body’s Content-Type, in this case it will be “application/json“. The @ResponseStatus sets the HTTP status code.
Here’s the method we need to add to our CustomerService – notice it just uses a built-in method on the repository to get the data, it’s very simple:
// add this import
import java.util.List;
// ...
public List<Customer> getAllCustomers() {
return repository.findAll();
}
With those changes in place, we can restart the service and call this new GET endpoint like this:
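For example, using cURL again:
curl -i http://localhost:8080/api/v1/customers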
You might like to do a few more POSTs and another GET to observe what happens.
Add a “get customer by ID” endpoint
Let’s add the final endpoint that we wanted in our service. We want to be able to get a specific customer using the ID. Here’s the code to add to the controller:
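A sketch of that handler, following the description below:
// add these imports
import java.util.Optional;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PathVariable;

// ...

// handle HTTP GET requests for a single customer by ID
@GetMapping(path = "{id}", produces = MediaType.APPLICATION_JSON_VALUE)
public ResponseEntity<Customer> getCustomer(@PathVariable("id") Integer id) {
    Optional<Customer> customer = customerService.getCustomer(id);
    if (customer.isPresent()) {
        return new ResponseEntity<>(customer.get(), HttpStatus.OK);
    }
    return new ResponseEntity<>(HttpStatus.NOT_FOUND);
}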
Here we see some differences to the previous endpoint implementation. This one is a little more sophisticated. First, we have added a path to the @GetMapping annotation to add a positional parameter to the end of the path, so this endpoint will be /api/v1/customers/{id}. In the method arguments we have a @PathVariable annotation to grab that {id} from the path and use it as an argument to our method.
Also, notice that the method returns ResponseEntity<Customer>. This gives us some more control over the response, and allows us to set different HTTP status codes (and if we wanted to we could also control the headers, body, etc.) based on our own business logic.
Inside this method we call our service’s (soon to be written) getCustomer(id) method which returns an Optional<Customer>. Then we check if the Optional actually contains a Customer, indicating that a customer entity/record was found for the specified id, and if so we return it along with an HTTP 200 (OK). If the Optional is empty, then return an HTTP 404 (not found).
Here’s the new method to add to the service:
// add this import
import java.util.Optional;
// ...
public Optional<Customer> getCustomer(Integer id) {
return repository.findById(id);
}
This one is fairly simple – we are just calling a standard built-in method on the JPA Repository class to get the data.
Now we can restart the application, and test the new endpoint by asking for customers that we know exist, and do not exist to observe the different outcomes:
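For example, assuming the customer with ID 1 exists and ID 999 does not:
curl -i http://localhost:8080/api/v1/customers/1
curl -i http://localhost:8080/api/v1/customers/999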
Notice the HTTP status codes are different in each case. Also, notice that the JSON returned when a customer is found is just one JSON object {…} not a list [{…}, … ,{…}] as in the get all customers endpoint.
Conclusion
Well there you have it, we have completed our simple customer microservice built using Spring Boot and Oracle Database. I hope you followed along and built it too, and enjoyed learning a bit about Spring Boot and Oracle! Stay tuned for more posts on this topic, each covering a little more advanced topic than the last. See you soon!
]]>https://redstack.dev/2023/02/03/a-first-spring-boot-microservice-with-oracle/feed/0markxnelsonTwo new Backend as a Service offerings live now!
https://redstack.dev/2022/12/21/two-new-backend-as-a-service-offerings-live-now/
https://redstack.dev/2022/12/21/two-new-backend-as-a-service-offerings-live-now/#comments<![CDATA[Mark Nelson]]>Wed, 21 Dec 2022 15:56:09 +0000<![CDATA[Uncategorized]]><![CDATA[ebaas]]><![CDATA[parse]]><![CDATA[spingboot]]><![CDATA[Spring]]><![CDATA[spring cloud]]>http://redstack.wordpress.com/?p=3790<![CDATA[Hi everyone! For the last few months I have been working on two projects which have just gone live with their first “Developer Preview” releases. If you’d like to check them out and see what I have been up to, … Continue reading →]]><![CDATA[
Hi everyone!
For the last few months I have been working on two projects which have just gone live with their first “Developer Preview” releases.
If you’d like to check them out and see what I have been up to, have a look at:
It’s been a lot of fun working on these, and I am really happy to be able to tell you about them at last!
The Oracle Mobile Backend as a Service offering is built on top of Parse and lets you easily build mobile and web apps using any of the Parse SDKs and have all your data stored in an Oracle Autonomous Database in JSON collections. It also includes the Parse Dashboard for managing your application data. It’s easy to install from OCI Marketplace, and once the install is done, you can start hitting those APIs and building your apps right away!
The Oracle Backend as a Service for Spring Cloud lets you easily install a comprehensive runtime environment for Spring Boot applications including a Kubernetes (OKE) cluster, Spring Config Server (with the config data in Oracle Autonomous Database), Spring Eureka Service Registry, APISIX API Gateway and Dashboard, Netflix Conductor, Spring Admin Dashboard, Prometheus, Grafana, Jaeger and Open Telemetry. You can build apps using Spring Data with JPA or JDBC access to the Oracle Autonomous Database. We have included a sample custom Spring component for using Oracle Transactional Event Queueing. There’s a CLI for deploying apps into the environment and managing configuration and database schemas for services. We also included a set of sample applications that demonstrate how to use the platform – these include service discovery, fault tolerance, distributed tracing and so on.
As “Developer Preview” implies – there’s much more to come in this space!
I am planning to write more blog posts really soon to demonstrate how to use both of these offerings. I hope you’ll check them out!
]]>https://redstack.dev/2022/12/21/two-new-backend-as-a-service-offerings-live-now/feed/1markxnelsonDevelopment environment setup for Spring Boot with Oracle
https://redstack.dev/2022/12/08/development-environment-setup-for-spring-boot-with-oracle/
https://redstack.dev/2022/12/08/development-environment-setup-for-spring-boot-with-oracle/#comments<![CDATA[Mark Nelson]]>Thu, 08 Dec 2022 14:29:27 +0000<![CDATA[Uncategorized]]><![CDATA[Java]]><![CDATA[Maven]]><![CDATA[oracle]]><![CDATA[Spring]]><![CDATA[springboot]]><![CDATA[vscode]]>http://redstack.wordpress.com/?p=3752<![CDATA[Hi again! I am starting a series of posts about writing Spring Boot microservice applications with the Oracle Database, I plan to cover topics like databsae access, messaging, external configuration, service discovery, fault tolerance, workflow, observability and so on. But … Continue reading →]]><![CDATA[
Hi again! I am starting a series of posts about writing Spring Boot microservice applications with the Oracle Database. I plan to cover topics like database access, messaging, external configuration, service discovery, fault tolerance, workflow, observability and so on. But before I get started, I wanted to document how I set up my development environment.
Personally, I work on Windows 11 with the Windows Subsystem for Linux and Ubuntu 20.04. Of course you can adjust these instructions to work on macOS or Linux.
$ java -version
java version "17.0.3" 2022-04-19 LTS
Java(TM) SE Runtime Environment (build 17.0.3+8-LTS-111)
Java HotSpot(TM) 64-Bit Server VM (build 17.0.3+8-LTS-111, mixed mode, sharing)
Great! Now, let’s move on to build automation.
Maven
You can use Maven or Gradle to build Spring Boot projects, and when you generate a new project from Spring Initializr (more on that later) it will give you a choice of these two. Personally, I prefer Maven, so that’s what I document here. If you prefer Gradle, I’m pretty sure you’ll already know how to set it up.
These days I find I am using Visual Studio Code for most of my coding. It’s free, lightweight, has a lot of plugins, and is well supported. Of course, you can use a different IDE if you prefer.
Another great feature of Visual Studio Code that I really like is the support for “remote coding.” This lets you run Visual Studio Code itself on Windows but it connects to a remote Linux machine and that’s where the actual code is stored, built, run, etc. This could be an SSH connection, or it could be connecting to a WSL2 “VM” on your machine. This latter option is what I do most often. So I get a nice friendly, well-behaved native desktop application, but I am coding on Linux. Kind of the best of both worlds!
You can download Visual Studio Code from its website and install it.
I use a few extensions (plugins) that you will probably want to get too! These add support for the languages and frameworks and give you things like completion and syntax checking and so on:
You can install these by opening the extensions tab (Ctrl-Shift-X) and using the search bar at the top to find and install them.
Containers and Kubernetes
Since our microservices applications are almost certainly going to end up running in Kubernetes, it’s a good idea to have a local test environment. I like to use “docker compose” for initial testing locally and then move to Kubernetes later.
I use Rancher Desktop for both containers and Kubernetes on my laptop. There are other options if you prefer to use something different.
Oracle Database
And last, but not least, you will need the Oracle Database container image so we can run a local database to test against. If you don’t already have it, you will need to go to Oracle Container Registry first, and navigate to “Database,” then “Enterprise,” and accept the license agreement, then pull the image with these commands:
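Something like this – the image tag and password here are just examples, and the ORACLE_PDB value matches the pdb1 service we connect to later in this post:
docker login container-registry.oracle.com
docker pull container-registry.oracle.com/database/enterprise:21.3.0.0
docker run -d --name oracle-db \
  -p 1521:1521 \
  -e ORACLE_PWD=Welcome123 \
  -e ORACLE_PDB=pdb1 \
  container-registry.oracle.com/database/enterprise:21.3.0.0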
The first time you start it up, it will create a database instance for you. This takes a few minutes; you can watch the logs to see when it is done:
docker logs -f oracle-db
You will see this message in the logs when it is ready:
#########################
DATABASE IS READY TO USE!
#########################
You can then stop and start the database container as needed – you won’t need to wait for it to create the database instance each time, it will stop and start in just a second or two.
docker stop oracle-db
docker start oracle-db
You are going to want to grab its IP address for later on, you can do that with this command:
docker inspect oracle-db | grep IPAddress
This container image has SQL*Plus in it, and you can use that as a database command line client, but I prefer the new Oracle SQLcl which is a lot nicer – it has completion and arrow key navigation and lots of other cool new features. Here’s a permalink for the latest version: https://download.oracle.com/otn_software/java/sqldeveloper/sqlcl-latest.zip
You can just unzip this and add it to your path too, like Maven and Java.
You can connect to the database using SQLcl like this (use the IP address you got above):
sql sys/Welcome123@//172.17.0.2:1521/pdb1 as sysdba
Well, that’s about everything we need! In the next post we’ll get started building a Spring Boot microservice!
]]>https://redstack.dev/2022/12/08/development-environment-setup-for-spring-boot-with-oracle/feed/1markxnelsonOracle REST Data Services 22.3 brings new REST APIs for Transactional Event Queueing
https://redstack.dev/2022/10/31/oracle-rest-data-services-22-3-brings-new-rest-apis-for-transactional-event-queueing/
https://redstack.dev/2022/10/31/oracle-rest-data-services-22-3-brings-new-rest-apis-for-transactional-event-queueing/#comments<![CDATA[Mark Nelson]]>Mon, 31 Oct 2022 16:01:11 +0000<![CDATA[Uncategorized]]><![CDATA[oracle]]><![CDATA[ORDS]]><![CDATA[REST]]><![CDATA[TEQ]]><![CDATA[txeventq]]>http://redstack.wordpress.com/?p=3718<![CDATA[Oracle REST Data Services 22.3 was released a couple of weeks ago, and it is now available on Oracle Autonomous Database as well! This release has a slew of new REST APIs for Oracle Transactional Event Queueing (or TxEventQ). If … Continue reading →]]><![CDATA[
Oracle REST Data Services 22.3 was released a couple of weeks ago, and it is now available on Oracle Autonomous Database as well! This release has a slew of new REST APIs for Oracle Transactional Event Queueing (or TxEventQ). If you have not heard of it, TxEventQ is essentially a new, faster implementation of Oracle Advanced Queueing which has been in the database for around twenty years.
Many of these new REST APIs are very similar to the Kafka REST APIs, since TxEventQ provides Kafka compatibility as one of its features.
In this post, I want to show you how to use a few of the APIs, and then I’ll give you an idea of what kinds of APIs are available and where to find more information.
The first thing to do is to grab an Autonomous Database instance. It’s available in the Always Free tier, so you can try this for free! If you are not familiar with creating one, and accessing SQL and so on – check out this free LiveLab for details.
Make sure you get a 21c database – you may need to toggle the “Always Free” option to see 21c. The APIs described in this post are supported in Oracle Database 21c (and later).
When you get into your SQL worksheet, grab the URL from the browser, it will be something like this:
Now, chop off the end of the URL and replace it with the base URL for the TxEventQ REST APIs, and save that in an environment variable to save us some typing!
And let’s create another environment variable with the authentication details. You can encode them using base64 like this, assuming your userid is admin and your password is your_password:
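For example (the variable name is just a convention to reuse in the curl requests that follow):
export AUTH_HEADER="Authorization: Basic $(echo -n 'admin:your_password' | base64)"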
Great, that will save us from typing those each time!
Create a topic
Let’s start with by creating a topic. We are going to need to know the database name for this – you can find that by running this query in your SQL worksheet:
select sys_context('userenv','db_name') from dual
You’ll need to put that database name into the URL below after “clusters” and before “topics”, in this example my database name is “XYZABC_RED1“:
In the body we specified the name of the topic (“mark1” in this case) and how many partitions we want the topic to have. When you run this request, you’ll see output something like this:
Now let’s create a consumer group, here’s the request, notice the topic name is in the body, and the name of the consumer group is the last part of the URL (“sub1” in this case):
You can put multiple records in the body to send more than one message on the topic.
Consume messages
Now, let’s consume the messages off that topic with our consumer sub1. Here’s the request – notice the topic name is in the body, and the consumer name is in the URL after “consumers”:
Great, hopefully that gives you a feel for how these REST APIs for TxEventQ work!
But wait, there’s more!
Of course there are a lot more APIs available than the few I have shown you so far. They all follow a fairly similar pattern, let’s take a look at a list of what’s available:
Topics APIs
Create topic, optionally with partition count
List topics
Get a topic
Create a consumer group
Publish message(s)
List topics in a specific cluster
Get a topic in a specific cluster
Delete a topic
Partitions APIs
List partitions in a topic
Get details of one partition in a topic
Get partition message offsets
List partitions in a topic in a cluster
Get details of one partition in a topic in a cluster
Or if you prefer, you can open the OpenAPI specification on your database instance. The URL will be something like this, and you can search the output for “teq” to find the APIs:
I hope you enjoyed this quick introduction to the new REST APIs for Transactional Event Queueing! Of course, this is available in any Oracle database, not just Autonomous Database. If you want to use Oracle REST Data Services with your own database, you might find this post about installing a standalone version interesting too!
]]>https://redstack.dev/2022/10/31/oracle-rest-data-services-22-3-brings-new-rest-apis-for-transactional-event-queueing/feed/3markxnelsonGetting started with the new observability exporter for Oracle database
https://redstack.dev/2022/09/27/getting-started-with-the-new-observability-exporter-for-oracle-database/
https://redstack.dev/2022/09/27/getting-started-with-the-new-observability-exporter-for-oracle-database/#respond<![CDATA[Mark Nelson]]>Tue, 27 Sep 2022 16:05:41 +0000<![CDATA[Uncategorized]]><![CDATA[database]]><![CDATA[metrics]]><![CDATA[observability]]><![CDATA[oracle]]><![CDATA[Prometheus]]>http://redstack.wordpress.com/?p=3681<![CDATA[My colleague Paul Parkinson recently published our new unified obserability exporter for Oracle Database on GitHub, you can read about it here. I wanted to start playing around with it to see what we can do with it. In this … Continue reading →]]><![CDATA[
My colleague Paul Parkinson recently published our new unified observability exporter for Oracle Database on GitHub – you can read about it here. I wanted to start playing around with it to see what we can do with it.
In this post I will start with a really simple example that just gets the exporter up and running and collects a few simple metrics from the database into Prometheus. In subsequent posts, I’ll go further and look at dashboards in Grafana, and also cover the logging and tracing capabilities! But you have to start somewhere, right?
First thing we need is a database of course! I just fired one up in a container like this:
If you have not used this image before, you will first need to go to Oracle Container Registry at https://container-registry.oracle.com, log in, and navigate to the Database category and then the “enterprise” image and accept the license agreement. You will also need to login your docker client so you can pull the image:
docker login container-registry.oracle.com
# this will prompt you for your username and password
The image will take a short time to pull the first time, and the first startup will actually create the database instance, and that takes a few minutes too. You can watch the logs to see when the database is ready:
docker logs -f oracle-db
You only need to have these delays the first time you start the image. After that you can stop and start the container as needed and it will retain the data and startup very quickly.
# to stop the container:
docker stop oracle-db
# to start the container:
docker start oracle-db
Ok, so now we have a database available. Let’s connect to it and create some data to play with. You can use your favorite client – there’s SQL*Plus in that image if you don’t have anything else available. You can start it and connect to the database like this:
If you don’t have those installed and you don’t want to – you can skip this step and just grab a pre-built container image from Oracle Container Registry:
If you do it this way, make sure to use the full name later when we start the exporter, not the short version!
Now we need to create a configuration file and define our metrics. I called mine mark-metrics.toml and here’s the content:
[[metric]]
context = "customers"
request = "SELECT count(*) as num_custs FROM customer"
metricsdesc = { num_custs = "Number of customers." }
[[metric]]
context = "system"
request = "select count(*) as session_count from v$session where username is not null and type = 'USER' and con_id = sys_context('userenv','con_id')"
metricsdesc = { session_count = "Current session count." }
[[metric]]
context = "system"
request = "select count(*) as active_sessions from v$session where username is not null and type = 'USER' and status = 'ACTIVE' and con_id = sys_context('userenv','con_id')"
metricsdesc = { active_sessions = "Active sessions." }
[[metric]]
context = "system"
request = "select (c.session_count - a.active_sessions) as inactive_sessions from (select count(*) as session_count from v$session where username is not null and type = 'USER' and con_id = sys_context('userenv','con_id')) c, (select count(*) as active_sessions from v$session where username is not null and type = 'USER' and status = 'ACTIVE' and con_id = sys_context('userenv','con_id')) a"
metricsdesc = { inactive_sessions = "Inactive sessions." }
[[metric]]
context = "system"
request = "select b.session_count as blocked_sessions from (select count(*) as session_count from v$session where username is not null and type = 'USER' and blocking_session_status = 'VALID' and con_id = sys_context('userenv','con_id')) b"
metricsdesc = { blocked_sessions = "Blocked sessions." }
I defined five metrics in this file. Each metric starts with the [[metric]] heading and can have several fields. You can see more information in the documentation here. In the spirit of keeping this first post simple, I just created basic metrics with no labels or anything fancy.
Let’s take a close look at the first metric, here it is again:
[[metric]]
context = "customers"
request = "SELECT count(*) as num_custs FROM customer"
metricsdesc = { num_custs = "Number of customers." }
It is in the context (or group) called customers. The metric itself is called num_custs. You can see how we use the metricsdesc to create a human-readable documentation/description for the metric. And the metric itself is defined with an SQL statement. Wow! That’s pretty cool, right? That means that anything I can write an SQL statement to get from the database can be exported as a metric! In this one I just count the number of entries in that customer table we just created.
The other four metrics are some simple queries that get the number of sessions in the database as well as how many are active, inactive and blocked. These are all in the system context. You can define whatever contexts you like.
When you later look at a metric in Prometheus its name will be something like this:
oracledb_customers_num_custs
Notice how the context (customers) and the metric name (num_custs) are in there.
Ok, now that we have defined our metrics, we can start up the exporter. Let’s run it in another container, alongside the database. We can start it like this:
There’s a couple of things to note here. First, I am providing the configuration file we just created using the -v mount. This will give the exporter access to the metrics definitions. Second, we need to tell it how to connect to the database. You’ll need to get the IP address of the database container using this command:
docker inspect oracle-db | grep IPAddress
Yours will probably be different from mine, so you’ll need to update the value of DATA_SOURCE_NAME to match your environment. And finally, a reminder – if you pulled the pre-built image down from Oracle Container Registry, you’ll need to use the fully qualified name on the last line.
Once this container starts up, grab its IP address too, we’ll need that in a minute:
docker inspect exporter | grep IPAddress
The exporter should start right up, and assuming we got the address right and no typos, it should be working and we can get metrics like this:
$ curl localhost:9161/metrics
# HELP oracledb_system_inactive_sessions Inactive sessions.
# TYPE oracledb_system_inactive_sessions gauge
oracledb_system_inactive_sessions 1.0
# HELP oracledb_up Whether the Oracle database server is up.
# TYPE oracledb_up gauge
oracledb_up 1.0
# HELP oracledb_system_blocked_sessions Blocked sessions.
# TYPE oracledb_system_blocked_sessions gauge
oracledb_system_blocked_sessions 0.0
# HELP oracledb_customers_num_custs Number of customers.
# TYPE oracledb_customers_num_custs gauge
oracledb_customers_num_custs 2.0
# HELP oracledb_system_active_sessions Active sessions.
# TYPE oracledb_system_active_sessions gauge
oracledb_system_active_sessions 1.0
# HELP oracledb_system_session_count Current session count.
# TYPE oracledb_system_session_count gauge
oracledb_system_session_count 2.0
If you don’t see this, check the container logs to see what the error was:
docker logs exporter
Assuming everything is working now, let’s start up Prometheus and configure it to scrape these metrics.
First, let’s create a configuration file called prometheus.yml with this content:
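A minimal sketch – replace the target address with the exporter IP address you grabbed earlier (the 10 second scrape interval is what the rest of this post assumes):
global:
  scrape_interval: 10s

scrape_configs:
  - job_name: 'oracle-exporter'
    metrics_path: '/metrics'
    static_configs:
      - targets: ['172.17.0.3:9161']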
The user interface looks like this, and you can type into that search field to find a metric. If you start typing “num_custs” it should find our metric. Then hit enter, or click on the Execute button to see the value of the metric. It might take up to 10 seconds for data to be available, since we configured the scrape interval as 10 seconds in our configuration file. You should see something like this – yours will probably say 1, not 2:
If you go insert some more records into that table and then check again, you’ll see the value is updated. You can also click on the Graph tab to view that as a time series graph. Try adding and removing records to see what happens. Remember to wait a little while between each update so that new metrics are collected.
You can also try the other metrics we created! So there we go, that’s covered the very basic starting steps of defining some metrics, running the exporter and scraping the metrics into Prometheus! Stay tuned for some follow up posts where I will build dashboards in Grafana, and also look at exporting logs and distributed tracing!
Bonus info: If you use WSL2 like I do, you might see a warning on the Prometheus web user interface about clock skew. If you do, you can fix that by updating the time in WSL like this:
sudo hwclock -s
]]>https://redstack.dev/2022/09/27/getting-started-with-the-new-observability-exporter-for-oracle-database/feed/0markxnelsonNew web page for Oracle Transactional Event Queueing
https://redstack.dev/2022/08/05/new-web-page-for-oracle-transactional-event-queueing/
https://redstack.dev/2022/08/05/new-web-page-for-oracle-transactional-event-queueing/#respond<![CDATA[Mark Nelson]]>Fri, 05 Aug 2022 17:25:29 +0000<![CDATA[Uncategorized]]><![CDATA[TEQ]]>http://redstack.wordpress.com/?p=3678<![CDATA[The new web page for Oracle Transactional Event Queueing is live and has lots of great information including sample code, links to hands-on labs, documentation and some user stories! Hope you can check it out!]]><![CDATA[
The new web page for Oracle Transactional Event Queueing is live and has lots of great information including sample code, links to hands-on labs, documentation and some user stories! Hope you can check it out!
]]>https://redstack.dev/2022/08/05/new-web-page-for-oracle-transactional-event-queueing/feed/0markxnelsonCreating a stored procedure (dare I call it a microservice?) to automatically process events on a queue
https://redstack.dev/2022/06/23/creating-a-stored-procedure-dare-i-call-it-a-microservice-to-automatically-process-events-on-a-queue/
https://redstack.dev/2022/06/23/creating-a-stored-procedure-dare-i-call-it-a-microservice-to-automatically-process-events-on-a-queue/#respond<![CDATA[Mark Nelson]]>Thu, 23 Jun 2022 17:14:43 +0000<![CDATA[Uncategorized]]><![CDATA[microservice]]><![CDATA[Notification]]><![CDATA[Subscriber]]><![CDATA[TEQ]]>http://redstack.wordpress.com/?p=3650<![CDATA[In this post I want to look at how to create a stored procedure in the database to automatically process events as they are produced on a Transactional Event Queue (TEQ). Having a small, discrete piece of code that processes … Continue reading →]]><![CDATA[
In this post I want to look at how to create a stored procedure in the database to automatically process events as they are produced on a Transactional Event Queue (TEQ).
Having a small, discrete piece of code that processes events off a queue is a pretty common use case. You could even call it a microservice, I guess, since it does meet the well-established criteria of having its own code base, being loosely coupled, independently deployable and testable. One thing I find really interesting about writing a “microservice” like this and deploying it in the Oracle Database is that I can essentially scale it to zero instances, and it will only run (and consume resources) when there is actually work for it to do. I could also use the Database Resource Manager to control how many resources it is able to consume if I wanted to. And it would not be all that hard to instrument it so I could get logs, metrics and even distributed tracing – but that’s another story!
So, let’s go ahead and build this thing!
We’ll start with a new Oracle Database. I am going to run it in a Docker container using the standard image provided on Oracle Container Registry. If you have not used it before, you will need to login to Oracle Container Registry at https://container-registry.oracle.com and then navigate to the “Database” section, and then “enterprise” and read and accept the license.
Make sure you are logged into Oracle Container Registry in your Docker client too:
It will take a few minutes (the first time only) for the database files to be created and the instance to start up. You can watch the logs, or use this little shell command to do the waiting for you:
while ! docker logs oracle-db | grep -q "DATABASE IS READY TO USE!";
do
sleep 10
done
Great, now we have a database running, let’s set up the necessary permissions for our user. I am going to use the pdbadmin user in the PDB1 pluggable database. So let’s give that user permissions to use the TEQ packages (I am using the new SQLcl command line tool, but you can use SQL*Plus or SQL Developer, or whatever tool you prefer):
# sql sys/Welcome123##@//localhost:1521/pdb1 as sysdba
SQL> alter session set container = pdb1;
SQL> grant dba to pdbadmin;
SQL> grant execute on dbms_aqadm to pdbadmin;
SQL> grant execute on dbms_aq to pdbadmin;
SQL> commit;
SQL> exit
Ok, now we can connect with our pdbadmin user and start setting up our environment:
# sql pdbadmin/Welcome123##@//localhost:1521/pdb1
First we want to create our (TEQ) queue (or topic) and start it. We’ll call the queue my_teq:
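-- a sketch: 19c procedure names shown; on 21c use create_transactional_event_queue instead of
-- create_sharded_queue. We use the JMS payload type (to match the JMS messages we dequeue below)
-- and add the my_subscriber subscriber that we will register the notification against
begin
    dbms_aqadm.create_sharded_queue (
        queue_name         => 'my_teq',
        multiple_consumers => true,
        queue_payload_type => dbms_aqadm.jms_type
    );
    dbms_aqadm.start_queue ( queue_name => 'my_teq' );
    dbms_aqadm.add_subscriber (
        queue_name => 'my_teq',
        subscriber => sys.aq$_agent ( 'my_subscriber', null, null )
    );
end;
/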
We’ll keep our microservice super-simple for this demonstration: we’ll just have it record the messages it receives in an “output” table – so let’s create that table now:
create table my_log (
message varchar(256),
when timestamp(6)
);
Ok, so here is our consumer microservice:
create or replace procedure receiver (
    context in raw,
    reginfo in sys.aq$_reg_info,
    descr in sys.aq$_descriptor,
    payload in varchar2,
    payloadl in number
) as
    dequeue_options dbms_aq.dequeue_options_t;
    message_properties dbms_aq.message_properties_t;
    message_handle raw ( 16 ) ;
    message sys.aq$_jms_text_message;
    no_messages exception;
    pragma exception_init ( no_messages, -25228 ) ;
begin
    -- dequeue as the subscriber we were notified for, starting from the notified message
    dequeue_options.msgid := descr.msg_id;
    dequeue_options.consumer_name := descr.consumer_name;
    dequeue_options.navigation := dbms_aq.first_message;
    loop
        -- read the next message off the queue/topic
        dbms_aq.dequeue (
            queue_name => 'my_teq',
            dequeue_options => dequeue_options,
            message_properties => message_properties,
            payload => message,
            msgid => message_handle
        );
        -- record the message text and the time we processed it
        insert into my_log values ( message.text_vc, sysdate );
        commit;
    end loop;
exception
    -- ORA-25228 is raised when there is nothing left to dequeue
    when no_messages then
        dbms_output.put_line ( 'No more messages for processing' ) ;
        commit;
end;
/
Let’s walk through that and talk about the details. First, the procedure must have this signature/interface:
procedure receiver (
context in raw,
reginfo in sys.aq$_reg_info,
descr in sys.aq$_descriptor,
payload in varchar2,
payloadl in number
)
The name of the procedure is up to you, but it must have those exact parameters in that order, since this is a callback, and the TEQ notification is expecting this signature so that it can pass the data about new messages to the consumer.
When we get the callback, we need to perform a dequeue operation to get the actual message/event off the queue/topic. Since it is possible that there is more than one, it’s a good idea to use a loop to read and process all of them before we complete. Here we have a simple loop to dequeue a message and then save the details in our log table:
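loop
    dbms_aq.dequeue (
        queue_name => 'my_teq',
        dequeue_options => dequeue_options,
        message_properties => message_properties,
        payload => message,
        msgid => message_handle
    );
    insert into my_log values ( message.text_vc, sysdate );
    commit;
end loop;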
We’ve also defined an exception handler for when there are no messages available (this should not normally happen, but it’s still good practice to cater for it anyway):
when no_messages then
dbms_output.put_line ( 'No more messages for processing' ) ;
commit;
I used the JMS message format in this example, but of course you could use RAW or JSON or a user-defined type instead.
Ok, so now that our microservice is ready, we need to tell the database to call it when there is a message to process. To do this, we create a notification as follows:
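-- a sketch of the registration: the important parts are the queue:subscriber string,
-- the AQ namespace, and the plsql:// callback pointing at our receiver procedure
begin
    dbms_aq.register (
        sys.aq$_reg_info_list (
            sys.aq$_reg_info (
                'PDBADMIN.MY_TEQ:MY_SUBSCRIBER',
                dbms_aq.namespace_aq,
                'plsql://receiver',
                hextoraw ( 'FF' )
            )
        ),
        1
    );
end;
/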
Ok, so let’s talk about what is happening here. This register function that we are running will set up the connection between the queue, the subscriber and the consumer. In the aq$_reg_info you can see the first parameter has the queue name followed by a colon and the subscriber name – so this is telling us “when we have a message on my_teq and it is addressed to the subscriber my_subscriber…”
The next parameter tells us that we are interested in AQ (and TEQ) notifications, and the third parameter tells us the callback address. In this case we are telling it to run the PL/SQL procedure called receiver.
Once that is done, you can check on the details with this query:
select r.reg_id, subscription_name, location_name, num_ntfns, num_pending_ntfns
from USER_SUBSCR_REGISTRATIONS r, V$SUBSCR_REGISTRATION_STATS s
where r.reg_id = s.reg_id;
REG_ID SUBSCRIPTION_NAME LOCATION_NAME NUM_NTFNS NUM_PENDING_NTFNS
______ ______________________________________ ________________ _________ _________________
301 "PDBADMIN"."MY_TEQ":"MY_SUBSCRIBER" plsql://receiver 40 0
If you come back and run this again later, you will see the number of notifications sent, and pending (i.e. the last two columns) will increase each time we send a message.
Ok, let’s enqueue a message (publish an event) to test this out!
We can use this command to send a test message. This creates and sends a JMS message on our my_teq queue/topic addressed to our my_subscriber consumer:
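-- a sketch of the enqueue: the text matches the output we will see below, and the
-- recipient list addresses the message to our my_subscriber consumer
declare
    enqueue_options    dbms_aq.enqueue_options_t;
    message_properties dbms_aq.message_properties_t;
    message_handle     raw ( 16 );
    message            sys.aq$_jms_text_message;
begin
    message := sys.aq$_jms_text_message.construct;
    message.set_text ( 'hello from mark' );
    message_properties.recipient_list ( 1 ) := sys.aq$_agent ( 'my_subscriber', null, null );
    dbms_aq.enqueue (
        queue_name         => 'my_teq',
        enqueue_options    => enqueue_options,
        message_properties => message_properties,
        payload            => message,
        msgid              => message_handle
    );
    commit;
end;
/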
Once that is run, the notification will kick in, the callback will occur, and our microservice will run, consuming all of the messages and dumping them out into our “output” table. We can check the results with this query:
SQL> select * from my_log;
MESSAGE WHEN
__________________ __________________________________
hello from mark 23-JUN-22 04.44.06.000000000 PM
Feel free to go run that a few more times to see what happens.
So there we go! We created a nice simple, loosely coupled consumer that will process messages/events as they arrive, and will scale to zero (consume no resources) when there is no work to do. Enjoy!
]]>https://redstack.dev/2022/06/23/creating-a-stored-procedure-dare-i-call-it-a-microservice-to-automatically-process-events-on-a-queue/feed/0markxnelsonJava to get go-routine-like virtual threads!
https://redstack.dev/2022/06/23/java-to-get-go-routine-like-virtual-threads/
https://redstack.dev/2022/06/23/java-to-get-go-routine-like-virtual-threads/#respond<![CDATA[Mark Nelson]]>Thu, 23 Jun 2022 14:44:50 +0000<![CDATA[Uncategorized]]><![CDATA[Java]]><![CDATA[JVM]]><![CDATA[thread]]>http://redstack.wordpress.com/?p=3646<![CDATA[Yay! Java is finally going to get some lightweight threads, a bit like go-routines, which allow you to create threads in the JVM without each one consuming an OS thread! I’m looking forward to trying it out in Java 19. … Continue reading →]]><![CDATA[
Yay! Java is finally going to get some lightweight threads, a bit like go-routines, which allow you to create threads in the JVM without each one consuming an OS thread!
]]>https://redstack.dev/2022/06/23/java-to-get-go-routine-like-virtual-threads/feed/0markxnelsonSome big updates for the Python Oracle library (cx_Oracle)
https://redstack.dev/2022/05/26/some-big-updates-for-the-python-oracle-library-cx_oracle/
https://redstack.dev/2022/05/26/some-big-updates-for-the-python-oracle-library-cx_oracle/#respond<![CDATA[Mark Nelson]]>Thu, 26 May 2022 13:46:58 +0000<![CDATA[Uncategorized]]><![CDATA[oracle]]><![CDATA[python]]>http://redstack.wordpress.com/?p=3642<![CDATA[There are some really interesting updates for the open source Python Oracle library (known as cx_oracle, but changing its name is part of this) – check it out here: https://cjones-oracle.medium.com/open-source-python-thin-driver-for-oracle-database-e82aac7ecf5a]]><![CDATA[
There are some really interesting updates for the open source Python Oracle library (known as cx_oracle, but changing its name is part of this) – check it out here:
]]>https://redstack.dev/2022/05/26/some-big-updates-for-the-python-oracle-library-cx_oracle/feed/0markxnelsonCross-region event propagation with Oracle Transactional Event Queues
https://redstack.dev/2022/05/24/cross-region-event-propagation-with-oracle-transactional-event-queues/
https://redstack.dev/2022/05/24/cross-region-event-propagation-with-oracle-transactional-event-queues/#comments<![CDATA[Mark Nelson]]>Tue, 24 May 2022 16:49:56 +0000<![CDATA[Uncategorized]]><![CDATA[JMS]]><![CDATA[OCI]]><![CDATA[propagation]]><![CDATA[TEQ]]>http://redstack.wordpress.com/?p=3527<![CDATA[In this post I want to demonstrate how to use Oracle Transactional Event Queues (TEQ) to propagate messages/events across regions. I will use two Oracle Autonomous Databases running in Oracle Cloud, one in Ashburn, VA and one in Phoenix, AZ … Continue reading →]]><![CDATA[
In this post I want to demonstrate how to use Oracle Transactional Event Queues (TEQ) to propagate messages/events across regions. I will use two Oracle Autonomous Databases running in Oracle Cloud, one in Ashburn, VA and one in Phoenix, AZ (about 2,000 miles apart).
Of course, there are a lot of reasons why you might want to propagate events like this, and you don’t necessarily have to do it across geographic regions; you might just want to do it across two database instances in the same data center, or even just between two topics in the same database!
Here’s a quick diagram of what we are going to build. We are going to use the JMS Pub/Sub model. Our producer will connect to the ASHPROD1 instance and put messages onto the topic ASH_TOPIC. Messages will be propagated from this topic to the topic PHX_TOPIC in the PHXPROD1 instance. Our consumer will connect to PHXPROD1 and consume messages from there.
To get started, let’s create two databases. To follow along, you’ll need an Oracle Cloud account – you can do this with the “Always Free” account using the 21c version of the Autonomous Database, so you can try this without spending any money. You can also use 19c if you prefer.
Creating the databases
First we log into the Oracle Cloud Infrastructure (OCI) Console at https://cloud.oracle.com. Enter your cloud account name and hit the “Next” button.
After you log in, click on the “hamburger” (three lines) menu (1) and go to “Oracle Database” (2) and then “Autonomous Database” (3) as shown:
Choose your compartment (1), and the region (2) (I’ll use Ashburn and Phoenix – use any two regions you like, or two in the same region will work too), then click on the “Create Autonomous Database” (3) button:
In the dialog, we need to give the database a name, I used ASHPROD1. Choose “Transaction Processing” as the workload type and “Shared Infrastructure” as the deployment type:
You can accept the default 19c database (or toggle that “Always Free” switch to use 21c). The default 1 OCPU, 1 TB is fine for this exercise. Also provide a password for the administrator (don’t forget it!):
In the “Choose network access” section, choose the option for secure access and click on the “Add My IP Address” button. Choose the “Bring Your Own License (BYOL)” option and provide an email address for the administrator:
Then click on the “Create Autonomous Database” button to create the database.
Now choose the second region, e.g. Phoenix, in the top right corner of the OCI Console and repeat this same process to create a second database, for example called PHXPROD1. This time though, choose the “secure access from anywhere” option, since we are going to need to be able to have ASHPROD1 connect to this instance too.
Obtain Database Wallets
So now we have our two databases. Let’s download the wallets so that we can connect to them. The database wallets contain the information needed to connect to, and authenticate with, the databases.
In the OCI Console, click on the database name to see the details of the database:
Next, click on the “DB Connection” button:
You will click on the “Download wallet” button (1) to get the wallet file, but while you are here, notice the connection strings (2) – we’ll use one of those later.
After you click on the button, provide a password for the wallet, and then click on the “Download” button:
Repeat this for the other database.
Creating our consumer
Let’s create a new project and write our consumer code. We’ll use Maven to simplify the dependency management and to make it easy to run our consumer. Let’s create a new directory and unzip our two wallets into this directory. So we should see something like this:
Let’s add a Maven POM file to set up our project. I am assuming you have Maven and a JDK installed. If not – go get those now. I am using Maven 3.8.4 and Java 17.0.3. Create a file called pom.xml with this content:
This defines the Maven coordinates for our project, the dependencies we need to compile and run our code, and also a convenience goal to run the consumer (or producer) directly from Maven so that we don’t have to worry about constructing the class path manually. Let’s also create some directories to store our code:
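Something like this will do:
mkdir -p src/main/java src/main/resources
The consumer starts by creating a UCP data source that points at the PHXPROD1 database using the wallet we just unzipped. Here is a minimal sketch of that part (the wallet directory name here is illustrative – use whatever directory you unzipped the PHXPROD1 wallet into):
// build a connection pool data source using the PHXPROD1 wallet
PoolDataSource ds = PoolDataSourceFactory.getPoolDataSource();
ds.setConnectionFactoryClassName("oracle.jdbc.pool.OracleDataSource");
ds.setURL("jdbc:oracle:thin:@phxprod1_high?TNS_ADMIN=./wallet_PHXPROD1");
ds.setUser("admin");
ds.setPassword(System.getenv("DB_PASSWORD"));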
This defines the URL that we will use to connect to the database. Notice that it is using an alias (phxprod1_high) – that might look familiar, remember we saw those on the OCI Console when we were downloading the wallet. If you take a look at the tnsnames.ora file in the PHXPROD1 wallet you will see how this is defined, something like this:
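# the values here are placeholders - yours will contain your own database name and region
phxprod1_high = (description=
    (retry_count=20)(retry_delay=3)
    (address=(protocol=tcps)(port=1522)(host=adb.us-phoenix-1.oraclecloud.com))
    (connect_data=(service_name=xxxxx_phxprod1_high.adb.oraclecloud.com))
    (security=(ssl_server_cert_dn="CN=adwc.uscom-east-1.oraclecloud.com, OU=Oracle BMCS US, O=Oracle Corporation, L=Redwood City, ST=California, C=US")))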
Notice that we are reading the password from an environment variable – so you’ll need to set that variable wherever you are going to run this (note – this is not my real password, just an example):
export DB_PASSWORD=Welcome123##
Next we set up a TopicConnection, start a JMS Session, look up our Topic and create a Subscriber. This is all fairly standard JMS stuff:
// create a JMS topic connection and session
TopicConnectionFactory tcf = AQjmsFactory.getTopicConnectionFactory(ds);
TopicConnection conn = tcf.createTopicConnection();
conn.start();
TopicSession session = (AQjmsSession)
conn.createSession(true, Session.AUTO_ACKNOWLEDGE);
// create a subscriber on the topic
Topic topic = ((AQjmsSession) session).getTopic(username, topicName);
AQjmsTopicSubscriber subscriber =
(AQjmsTopicSubscriber) session.createDurableSubscriber(topic, "BOOK");
System.out.println("Waiting for messages...");
I created a Durable Subscriber and named it BOOK. We’ll see that name again later, remember that!
Finally, we are going to just wait for messages forever and print them out.
// wait forever for messages to arrive and print them out
while (true) {
AQjmsTextMessage message = (AQjmsTextMessage) subscriber.receive(1_000);
if (message != null) {
if (message.getText() != null) {
System.out.println(message.getText());
} else {
System.out.println();
}
}
session.commit();
}
Normally, we would not wait forever, and we’d clean up our resources, but since this is just a small example consumer, we’ll make some allowances.
Ok, that takes care of our consumer. We won’t run it yet, since we have not created the topics. Let’s do that now!
Create the topics
We are going to create two topics, one in each database instance/region, and configure propagation between them. Let’s review what we want:
Ashburn (producer side)
Phoenix (consumer side)
ASHPROD1 database instance
PHXPROD1 database instance
ASH_TOPIC topic
PHX_TOPIC topic
Navigate back to your ASHPROD1 Autonomous Database in the OCI Console and click on the “Database Actions” button:
Note that your browser might think this is a pop-up and block it. If so, clicking on the button again usually lets the browser know you really meant to open it.
In the Database Actions page, click on the “SQL” card to open the SQL Worksheet:
If you get the tour, you can click on “Next” or the “X” to close it.
We are just going to create our topics in the ADMIN schema. In real life, you would probably create a new user/schema to keep your topics in, perhaps several so that you can group them for easier administration. You can create topics with Java or PL/SQL. For this example, we will use PL/SQL.
Here’s the commands to create and start our new topic, ASH_TOPIC:
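-- a sketch, using the JMS payload type since our producer and consumer are JMS clients
begin
    dbms_aqadm.create_sharded_queue (
        queue_name         => 'ASH_TOPIC',
        multiple_consumers => true,
        queue_payload_type => dbms_aqadm.jms_type
    );
    dbms_aqadm.start_queue ( queue_name => 'ASH_TOPIC' );
end;
/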
If you are using 21c, instead of create_sharded_queue, you should use create_transactional_event_queue – that procedure was renamed in 21c.
You can put these commands into the worksheet at the top (1), then click on the “Run Statement” button (2). You will see the result in the “Script Output” window (3) as shown below:
If you want to check, you can run this query to see details of the queues and topics in your schema:
select * from user_queues;
Now, we need to go to our PHXPROD1 database and create the PHX_TOPIC there. Just repeat what you just did for ASHPROD1 on the PHXPROD1 database and remember to change the name of the topic in the commands that you run!
Create the Database Link
Great, our topics are ready to go! Next, we need to create a Database Link from the ASHPROD1 database to the PHXPROD1 database. The Database Link will allow us to perform actions against the remote database, in this case, to enqueue messages on the remote topic.
Since our databases are using TLS, we need to make the remote database (PHXPROD1) wallet available to the ASHPROD1 database, so that it can authenticate. The easiest way to do this is to upload the files we need into an Object Store bucket.
Let’s create the bucket. In the OCI Console, make sure you are in the Ashburn region and then click on the “hamburger” menu (the three lines at the top left), then “Storage” and the “Buckets”:
Then click on the “Create Bucket” button. Give your bucket a name, I used dblinks and click on the “Create” button. All the defaults are fine for what we need:
Notice that your bucket is private:
Click on the “Upload” button to upload a file:
Then click on the “select files” link to choose the file. We need the file called cwallet.sso in the wallet we downloaded for the PHXPROD1 database (the remote database):
Once the upload completes you can close that dialog and then click on the “three dots” (1) next to the file we just uploaded and choose the “Create Pre-Authenticated Request” (2) option:
The defaults are what we want here – we want to be able to read this one object only. If you want to change the expiration to something like 2 days, just to be on the safe side, that’s not a bad idea at all! Click on the “Create Pre-Authenticated Request” button:
Make sure you take a copy of the URL, you won’t be able to get it again!
Ok, now we are ready to create the link. Open the SQL Worksheet for the ASHPROD1 database (the local/source database) and run these commands. You will need to get the right values for several fields before you run this, I’ll tell you where to get them next:
create or replace directory AQ_DBLINK_CREDENTIALS
as 'aq_dblink_credentials';
BEGIN
DBMS_CLOUD.GET_OBJECT(
object_uri => 'https://objectstorage.us-ashburn-1.oraclecloud.com/p/xxxx/n/xxxx/b/dblinks/o/cwallet.sso',
directory_name => 'AQ_DBLINK_CREDENTIALS',
file_name => 'cwallet.sso');
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'CRED',
username => 'ADMIN', -- remote db has case-sensitive login enabled, must be uppercase
password => 'Welcome123##');
DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK(
db_link_name => 'PHXPROD1',
hostname => 'adb.us-phoenix-1.oraclecloud.com',
port => '1522',
service_name => 'xxxxx.adb.oraclecloud.com',
ssl_server_cert_dn => 'CN=adwc.uscom-east-1.oraclecloud.com, OU=Oracle BMCS US, O=Oracle Corporation, L=Redwood City, ST=California, C=US',
credential_name => 'CRED',
directory_name => 'AQ_DBLINK_CREDENTIALS');
END;
In the GET_OBJECT call, the object_uri needs to be that URL that you just copied from the Pre-Authenticated Request.
In the CREATE_CREDENTIAL call, the username should be the user for the remote (PHXPROD1) database – we can just use ADMIN. Note that this must be in upper case since Autonomous Database is configured for case-sensitive login by default. The password should be the password for that user.
In the CREATE_DATABASE_LINK call, the db_link_name is what we are going to use to refer to the remote database. I just used the name of the database – you’ll see later why that makes things more intuitive. You can get the values for the hostname, port, service_name and ssl_server_cert_dn fields from the wallet you downloaded. Make sure you use the wallet for the PHXPROD1 database. You will find the right values in the tnsnames.ora file, and you can just copy them in here. Here’s an example, I’ve bolded the values we need:
Once you have all the right values, paste this into the SQL Worksheet and click on the “Run Script” button:
You can check it worked by doing a query through the database link. For example, let’s get a list of the queues/topics on the remote database. We are entering this query on the ASHPROD1 instance, using the database link (“@PHXPROD1”) to have it run on the other database. Notice that the output shows the topic PHX_TOPIC we created in the PHXPROD1 database:
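-- run this on ASHPROD1; the PHX_TOPIC we created in PHXPROD1 should appear in the results
select name, queue_type from user_queues@PHXPROD1;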
Start message propagation
Ok, now we are ready to start propagating messages! (Yay!)
We want to run these commands in the SQL Worksheet on the ASHPROD1 database (the source/local database):
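Here is a sketch of those commands, using the standard AQ/TEQ propagation calls – we add a remote subscriber on ASH_TOPIC whose address points at PHX_TOPIC through the database link, and then schedule the propagation:
begin
    dbms_aqadm.add_subscriber (
        queue_name     => 'ASH_TOPIC',
        subscriber     => sys.aq$_agent ( null, 'ADMIN.PHX_TOPIC@PHXPROD1', null ),
        queue_to_queue => true
    );
    dbms_aqadm.schedule_propagation (
        queue_name        => 'ASH_TOPIC',
        destination       => 'PHXPROD1',
        destination_queue => 'ADMIN.PHX_TOPIC',
        latency           => 0
    );
end;
/
Before we send anything, make sure the consumer is running in another terminal (this assumes the POM defines a consumer profile alongside the producer profile used below):
mvn clean compile exec:exec -P consumer
The last piece is the producer itself. It mirrors the consumer we wrote earlier – here is a minimal sketch of the interesting part (the data source is set up the same way, just using the ashprod1_high alias and the ASHPROD1 wallet):
// create a publisher on ASH_TOPIC and send a simple JMS text message
TopicConnectionFactory tcf = AQjmsFactory.getTopicConnectionFactory(ds);
TopicConnection conn = tcf.createTopicConnection();
conn.start();
TopicSession session = (AQjmsSession) conn.createSession(true, Session.AUTO_ACKNOWLEDGE);
Topic topic = ((AQjmsSession) session).getTopic("ADMIN", "ASH_TOPIC");
TopicPublisher publisher = session.createPublisher(topic);
publisher.publish(session.createTextMessage("hello from ashburn, virginia!"));
session.commit();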
In the producer, we create a topic publisher and send a simple JMS text message to the topic.
Let’s run our producer now:
export DB_PASSWORD=Welcome123## <-- use your real password!
mvn clean compile exec:exec -P producer
When that finishes (you’ll see a “BUILD SUCCESS” message), go and have a look at your consumer. You should see something like this:
[INFO] --- exec-maven-plugin:3.0.0:exec (default-cli) @ propagation ---
Waiting for messages...
hello from ashburn, virginia!
Yay! It worked! We just published a message on the ASH_TOPIC in the ASHPROD1 instance and it was propagated to PHXPROD1 for us and our consumer read it off the PHX_TOPIC in PHXPROD1.
Here’s an interesting query we can run to see what happened:
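-- a sketch: the aq$<topic> views show each message and the consumer it was delivered to
select msg_state, consumer_name from aq$ash_topic;   -- run this on ASHPROD1
select msg_state, consumer_name from aq$phx_topic;   -- and this on PHXPROD1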
Notice the consumer names – in the local ASHPROD1 instance, the consumer is AQ$_P_106126_92PHXPROD1 (yours will probably be slightly different.) That’s the propagation consumer that is running to propagate the messages to PHXPROD1.
But in the PHXPROD1 instance, the consumer is BOOK! That’s the name we gave to our consumer:
Go ahead and send some more messages with the producer! Enjoy!
]]>https://redstack.dev/2022/05/24/cross-region-event-propagation-with-oracle-transactional-event-queues/feed/1markxnelsonInstalling Oracle REST Data Services (standalone)
https://redstack.dev/2022/05/17/installing-oracle-rest-data-services/
https://redstack.dev/2022/05/17/installing-oracle-rest-data-services/#respond<![CDATA[Mark Nelson]]>Tue, 17 May 2022 16:27:22 +0000<![CDATA[Uncategorized]]><![CDATA[ORDS]]>http://redstack.wordpress.com/?p=3466<![CDATA[I have been using Oracle REST Data Services (you might know it as “Database Actions”) with my Oracle Autonomous Database for a while now, and I wanted to play with some new features, which led me to want to install … Continue reading →]]><![CDATA[
I have been using Oracle REST Data Services (you might know it as “Database Actions”) with my Oracle Autonomous Database for a while now, and I wanted to play with some new features, which led me to want to install my own (“customer managed” or “standalone”) ORDS instance. It took me a few goes, and some help from Jeff Smith (yes, that Jeff Smith) to get it right, so I thought it would be good to document how I got it working!
In this example, I am going to use an Oracle 21c database, and I will set up ORDS 22.1 in one of the pluggable databases. Once we have it up and running, we will use Database Actions and look at some of the services in the REST service catalog.
Setting up a database
First, of course I needed a database to play with. I fired up a 21c database in a container for this exercise. You will need to go accept the license agreement before you can pull the container image.
Go to https://container-registry.oracle.com and navigate to “Database” and then “enterprise” and click on the button to accept the agreement. You may have to log in if you are not already.
You will also need to log in to Oracle Container Registry with your container runtime, in this post I am using Docker. This will prompt for your username and password:
docker login container-registry.oracle.com
Now we can start up a database in a container. Here is the command I used – I set a password and the SID/service names, and made sure to expose the database port so we can access the database from outside the container:
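# a sketch - adjust the image tag to whatever is current when you read this
docker run -d --name oracle-db -p 1521:1521 \
    -e ORACLE_SID=orcl -e ORACLE_PDB=pdb1 -e ORACLE_PWD=Welcome123## \
    container-registry.oracle.com/database/enterprise:21.3.0.0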
Note: If you use different names, you will need to adjust the example commands appropriately! Also, if you want to be able to restart this database without losing all your data, you’ll want to mount a volume – the OCR page has details on how to do that.
It takes a few minutes to start up. You can watch the logs using this command, and you will need to wait until you see the message indicating it is ready to use:
docker logs -f oracle-db
When the database is ready, we can log on and give the necessary privileges to our PDB admin user.
sqlplus sys/Welcome123##@//localhost:1521/orcl as sysdba
SQL> alter session set container = PDB1;
SQL> grant dba to pdbadmin;
Ok, now we are ready to install ORDS!
Installing ORDS
First step is to download it, of course. Here is the site to get the latest version of ORDS:
Once you have it downloaded, just unzip it into a new directory. I unzipped it into /home/mark/ords.
The steps that I am going to describe here are described in more detail in the documentation.
Now we want to run the pre-install script to set up the necessary privileges. I am using the pdbadmin user, the admin user in my PDB. This script will take just a few moments to run:
sqlplus sys/Welcome123##@//localhost:1521/pdb1 as sysdba \
@scripts/installer/ords_installer_privileges.sql pdbadmin
Great, now we can run the installer. I used the interactive installer, which will ask you for the necessary information and let you type it in as you go. It is also possible to do a “silent” install by providing all of the information on the command line – the documentation explains how to do this.
Create a directory to hold the configuration and start the interactive installer:
cd /home/mark/ords
export PATH=/home/mark/ords/bin:$PATH
mkdir config
ords --config /home/mark/ords/config install
Here’s what the interactive install dialog looks like, I highlighted the data I entered in bold, mostly I just took the defaults:
Oracle REST Data Services - Interactive Install
Enter a number to select the type of installation
[1] Install or upgrade ORDS in the database only
[2] Create or update a database pool and install/upgrade ORDS in the database
[3] Create or update a database pool only
Choose [2]:
Enter a number to select the database connection type to use
[1] Basic (host name, port, service name)
[2] TNS (TNS alias, TNS directory)
[3] Custom database URL
Choose [1]:
Enter the database host name [localhost]:
Enter the database listen port [1521]:
Enter the database service name [orcl]: pdb1
Provide database user name with administrator privileges.
Enter the administrator username: pdbadmin
Enter the database password for pdbadmin:
Connecting to database user: pdbadmin url: jdbc:oracle:thin:@//localhost:1521/pdb1
Retrieving information.
Enter the default tablespace for ORDS_METADATA and ORDS_PUBLIC_USER [SYSAUX]:
Enter the temporary tablespace for ORDS_METADATA and ORDS_PUBLIC_USER [TEMP]:
Enter a number to select additional feature(s) to enable:
[1] Database Actions (Enables all features)
[2] REST Enabled SQL and Database API
[3] REST Enabled SQL
[4] Database API
[5] None
Choose [1]:
Enter a number to configure and start ORDS in standalone mode
[1] Configure and start ORDS in standalone mode
[2] Skip
Choose [1]:
Enter a number to use HTTP or HTTPS protocol
[1] HTTP
[2] HTTPS
Choose [1]:
Enter the HTTP port [8080]:
Note: I just used HTTP, but if you want to use HTTPS, you will probably want to create some certificates and configure them in the installer. Here’s some commands to create a self-signed certificate and convert the key to the DER format ORDS requires:
# these are optional - only required if you want to use HTTPS
# create a self-signed certificate and private key
openssl req -new -x509 -sha256 -newkey rsa:2048 -nodes -keyout ords.key.pem \
-days 365 -out ords.pem
# (optional) inspect the certificate
openssl x509 -in ords.pem -text -noout
# convert the private key to the DER (PKCS8) format that ORDS expects
openssl pkcs8 -topk8 -inform PEM -outform DER -in ords.key.pem -out ords.der -nocrypt
Once you complete the interview, the installer will perform the installation. It takes just a couple of minutes, and it will start up the ORDS standalone server for you. If you need to stop it (with Ctrl-C) you can restart it with this command:
ords --config /home/mark/ords/config serve
Ok, now we have ORDS up and running, we are going to need a user!
Preparing an ORDS user
Let’s create a regular database user and give them access to ORDS.
Using the PDB admin user, we can create a new user and give them the necessary permissions to use ORDS:
sqlplus pdbadmin/Welcome123##@//localhost:1521/pdb1
SQL> create user mark identified by Welcome123##;
SQL> grant connect, resource to mark;
SQL> grant unlimited tablespace to mark;
SQL> begin
ords.enable_schema(
p_enabled => true,
p_schema => 'mark',
p_url_mapping_type => 'BASE_PATH',
p_url_mapping_pattern => 'mark',
p_auto_rest_auth => false
);
commit;
end;
/
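Now point your browser at the ORDS standalone server. With the values used here, the Database Actions sign-in will be at something like this (the exact path varies a little between ORDS versions):
http://localhost:8080/ords/sql-developer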
You may need to change the hostname or port if you used something different.
You should see the login page:
Enter your username – I used mark – and then press Next, and enter the password (I used Welcome123##). This will take you to the main “Database Actions” page:
Let’s create a table and enter some data
Click on the SQL card (top left) to open the SQL worksheet, and enter these statements:
create table city (
name varchar2(64),
population number
);
insert into city values ('Tokyo', 37000000);
insert into city values ('Dehli', 29000000);
insert into city values ('Shanghai', 26000000);
insert into city values ('Sao Paulo', 21000000);
insert into city values ('Mexico City', 21000000);
Click on the “Run Script” icon to execute these statements – it’s the one the red arrow is highlighting:
Let’s expose that table as a REST service!
Creating a REST service
ORDS allows us to easily expose an SQL statement, or a PL/SQL block as a REST service. Let’s navigate to the REST page – click on the “hamburger menu” (1) and then the REST page (2):
The basic structure is that we create a “module” which contains “templates” which in turn contain “handlers.” So let’s start by creating a module. Click on the “Modules” option in the top menu.
Then click on the “Create Module” button in the top right corner:
Give the module a name (I used mark) and a base path (I used /api/). Since we are just playing here, set the “Protected by Privilege” to Not protected. Obviously, in real life, you’d set up authentication, for example using OAuth, which ORDS provides out of the box – but that’s another post. Finally, click on the “Create” button to create the module. It will now appear in the modules page:
Click on the module name (“mark” in the example above) to open the module, and click on the “Create Template” button on the right hand side:
Enter a URI Template for this service, I used cities for mine, then click on the “Create” button:
Now you will see the template page. Click on the “Create Handler” button on the right:
In the “Create Handler” dialog, we provide details of the service. Notice that you can choose the HTTP Method (GET, POST, DELETE, etc.) and you can control paging. For this service, we want to create a GET handler and we want the “Source Type” to be Collection Query which lets us enter an SQL statement. While you are here, have a look in the pull down and notice that you can also use PL/SQL! You can also use bind variables in here, so we can accept parameters and use them in the query or PL/SQL code.
For now, enter this simple query, then click on the “Create” button:
select * from city
Note: You should not include the semi-colon!
Once you have created the handler, you will see the details view, where you can test it by clicking on the “Run” button (1):
Notice that you see the results in the bottom half of the screen. You also have the URL for the service provided (2) and there is a “copy” button on the right hand side. Let’s test our service using cURL:
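curl http://localhost:8080/ords/mark/api/cities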
ORDS also provides a heap of out of the box services for us automatically. To explore these, let’s use Postman, which is a very popular tool for REST testing. You can download it from Postman.
Jeff Smith has a great post here that explains how to import all the ORDS REST APIs into Postman.
When you open Postman, click on the “Import” button:
Now you need the right URL! If you have been following along and using the same names as me your URL will be:
If you used a different user, you will need to change “mark” to your username in that URL. After you click on “Import” chose “Link” as the type and enter your URL:
One good thing about Postman is that we can set the authentication parameters at the top level, on that “ORDS Database API” folder that you just created. Open that and click on the “Authorization” tab, choose “Basic Auth” as the type and enter the database user and password:
In this folder you will see a whole collection of services for all kinds of things. Let’s try a simple one! Navigate to the “Get Database version” service and click on the “Send” button in the top right corner. You’ll see the result data in the bottom pane:
Well, there you go! We installed ORDS, used the Database Actions and REST interfaces, created a service and explored the out of the box services! I hope you enjoyed!
]]>https://redstack.dev/2022/05/17/installing-oracle-rest-data-services/feed/0markxnelsonThe OCI Service Mesh is now available!
https://redstack.dev/2022/04/27/the-oci-service-mesh-is-now-available/
https://redstack.dev/2022/04/27/the-oci-service-mesh-is-now-available/#respond<![CDATA[Mark Nelson]]>Wed, 27 Apr 2022 19:41:28 +0000<![CDATA[Uncategorized]]><![CDATA[OCI]]><![CDATA[Service Mesh]]>http://redstack.wordpress.com/?p=3457<![CDATA[Dusko Vukmanovic just announced the general availability of OCI Service Mesh in this blog post. It provides security, observability, and network traffic management for cloud native applications without requiring any changes to the applications. Its a free managed service and its available in all … Continue reading →]]><![CDATA[
It provides security, observability, and network traffic management for cloud native applications without requiring any changes to the applications.
It’s a free managed service and it’s available in all commercial regions today. Check it out!
]]>https://redstack.dev/2022/04/27/the-oci-service-mesh-is-now-available/feed/0markxnelsonPlaying with Kafka Java Client for TEQ – creating the simplest of producers and consumers
https://redstack.dev/2022/04/26/playing-with-okafka-creating-the-simplest-of-producers-and-consumers/
https://redstack.dev/2022/04/26/playing-with-okafka-creating-the-simplest-of-producers-and-consumers/#comments<![CDATA[Mark Nelson]]>Tue, 26 Apr 2022 19:33:08 +0000<![CDATA[Uncategorized]]><![CDATA[ADB]]><![CDATA[kafka]]><![CDATA[okafka]]>http://redstack.wordpress.com/?p=3418<![CDATA[Today I was playing with Kafka Java Client for TEQ, that allows you to use Oracle Transactional Event Queues (formerly known as Sharded Queues) in the Oracle Database just like Kafka. Kafka Java Client for TEQ is available as a … Continue reading →]]><![CDATA[
Today I was playing with Kafka Java Client for TEQ, that allows you to use Oracle Transactional Event Queues (formerly known as Sharded Queues) in the Oracle Database just like Kafka.
In this preview version, there are some limitations documented in the repository, but the main one to be aware of is that you need to use the okafka library, not the regular kafka one, so you would need to change existing kafka client code if you wanted to try out the preview.
Preparing the database
To get started, I grabbed a new Oracle Autonomous Database instance on Oracle Cloud, and I opened up the SQL Worksheet in Database Actions and created myself a user. As the ADMIN user, I ran the following commands:
create user mark identified by SomePassword; -- that's not the real password!
grant connect, resource to mark;
grant create session to mark;
grant unlimited tablespace to mark;
grant execute on dbms_aqadm to mark;
grant execute on dbms_aqin to mark;
grant execute on dbms_aqjms to mark;
grant select_catalog_role to mark;
grant select on gv$instance to mark;
grant select on gv$listener_network to mark;
commit;
And of course, I needed a topic to work with, so I logged on to SQL Worksheet as my new MARK user and created a topic called topic1 with these commands:
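A sketch of those commands (the okafka samples also set a few extra queue parameters, such as the number of event streams/shards – see the scripts in the okafka repository for the full version):
begin
    dbms_aqadm.create_sharded_queue (
        queue_name         => 'TOPIC1',
        multiple_consumers => true
    );
    dbms_aqadm.start_queue ( queue_name => 'TOPIC1' );
end;
/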
Note that this is for Oracle Database 19c. If you are using 21c, create_sharded_queue is renamed to create_transactional_event_queue, so you will have to update that line.
The topic is empty right now, since we just created it, but here are a couple of queries that will be useful later. We can see the messages in the topic, with details including the enqueue time, status, etc., using this query:
select * from topic1;
This is a useful query to see a count of messages in each status:
select msg_state, count(*)
from aq$topic1
group by msg_state;
Building the OKafka library
We need to build the OKafka library and install it in our local Maven repository so that it will be available to use as a dependency since the preview is not currently available in Maven Central.
First, clone the repository:
git clone https://github.com/oracle/okafka
Now we can build the uberjar with the included Gradle wrapper:
cd okafka
./gradlew fullJar
This will put the JAR file in gradle/build/libs and we can install this into our local Maven repository using this command:
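# the Maven coordinates and jar file name here are illustrative - match them to what the build produced
mvn install:install-file \
    -Dfile=gradle/build/libs/okafka-0.8.jar \
    -DgroupId=org.oracle.okafka -DartifactId=okafka -Dversion=0.8 -Dpackaging=jar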
I am using Java 17 for this example. But you could use anything from 1.8 onwards, just update the version in the properties if you are using an earlier version.
Let’s walk through this code and talk about what it does.
First, let’s notice the imports. We are importing the OKafka versions of the familiar Kafka classes. These have the same interfaces as the standard Kafka ones, but they work with Oracle TEQ instead:
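// okafka classes, not the org.apache.kafka ones (package names per the okafka preview)
import org.oracle.okafka.clients.producer.KafkaProducer;
import org.oracle.okafka.clients.producer.ProducerRecord;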
In the main() method we first set the log level and then we load some properties from our producer.properties config file. You will see the getProperties() method at the end of the file is fairly standard – it just reads the file and returns the contents as a new Properties object.
Let’s see what’s in that producer.properties file, which is located in the src/main/resources directory:
There are two groups of properties in there. The first group provide details about my Oracle Autonomous Database instance, including the location of the wallet file – we’ll get that and set it up in a moment.
The second group are the normal Kafka properties that you might expect to see, assuming you are familiar with Kafka. Notice that the bootstrap.servers lists the address of my Oracle Autonomous Database, not a Kafka broker! Also notice that we are using the serializers (and later, deserializers) provided in the OKafka library, not the standard Kafka ones.
Next, we set the topic name by reading it from the properties file. If it is not there, the second argument provides a default/fallback value:
And now we are ready to create the producer and send some messages:
try(KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
for (int i = 0; i < 100; i++) {
producer.send(new ProducerRecord<String, String>(
topicName, 0, "key", "value " + i));
}
System.out.println("sent 100 messages");
} catch (Exception e) {
e.printStackTrace();
}
We created the KafkaProducer and for this example, we are using String for both the key and the value.
We have a loop to send 100 messages, which we create with the ProducerRecord class. We are just setting them to some placeholder data.
Ok, that’s all we need in the code. But we will need to get the wallet and set it up so Java programs can use it to authenticate. Have a look at this post for details on how to do that! You just need to download the wallet from the OCI console, unzip it into a directory called wallet – put that in the same directory as the pom.xml, and then edit the sqlnet.ora to set the DIRECTORY to the right location, e.g. /home/mark/src/okafka/wallet for me, and then add your credentials using the setup_wallet.sh I showed in that post.
Finally, you need to add these lines to the ojdbc.properties file in the wallet directory to tell OKafka the user to connect to the database with:
When you run the producer, you can see it dumps out the properties, and then after some informational messages you see the “sent 100 messages” output. Now you might want to go and run that query from earlier to look at the messages in the database!
Now, lets move on to creating a consumer, so we can read those messages back.
Creating the Consumer
The consumer is going to look very similar to the producer, and it will also have its own properties file. Here’s the contents of the properties file first – put this in src/main/resources/consumer.properties:
A lot of this is the same as the producer, so let’s walk through the parts that are different.
First, we load the consumer properties file, which has a few different properties that are relevant for consumers. In particular, we are setting max.poll.records to 100 – so we’ll only be reading at most 100 messages off the topic at a time.
Here’s how we create the consumer:
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Arrays.asList(topicName));
Again, you may notice that this is very similar to Kafka. We are using String as the type for both the key and value. Notice we provided the appropriate deserializers in the property file, the ones from the OKafka library, not the standard Kafka ones.
We open our consumer, poll for messages (for 30 seconds), print out some information about each message, and then close our consumer. Again, this is very simple, but it’s enough to test consuming messages.
We can run this and we should see all of the message data in the output, here’s how to run it, and an excerpt of the output:
So there you go! We successfully created a very simple producer and consumer and we sent and received messages from a topic using the OKafka library and Oracle Transactional Event Queues!
]]>https://redstack.dev/2022/04/26/playing-with-okafka-creating-the-simplest-of-producers-and-consumers/feed/1markxnelsonLoading data into Autonomous Data Warehouse using Datapump
https://redstack.dev/2022/04/12/loading-data-into-autonomous-data-warehouse-using-datapump/
https://redstack.dev/2022/04/12/loading-data-into-autonomous-data-warehouse-using-datapump/#respond<![CDATA[Mark Nelson]]>Tue, 12 Apr 2022 19:47:51 +0000<![CDATA[Uncategorized]]>http://redstack.wordpress.com/?p=3415<![CDATA[Today I needed to load some data in my Oracle Autonomous Database running on Oracle Cloud (OCI). I found this great article that explained just what I needed! Thanks to Ankur Saini for sharing!]]><![CDATA[
Today I needed to load some data in my Oracle Autonomous Database running on Oracle Cloud (OCI). I found this great article that explained just what I needed!
]]>https://redstack.dev/2022/04/12/loading-data-into-autonomous-data-warehouse-using-datapump/feed/0markxnelsonConfiguring a Java application to connect to Autonomous Database using Mutual TLS
https://redstack.dev/2022/04/11/configuring-a-java-application-to-connect-to-autonomous-database-using-mutual-tls/
https://redstack.dev/2022/04/11/configuring-a-java-application-to-connect-to-autonomous-database-using-mutual-tls/#comments<![CDATA[Mark Nelson]]>Mon, 11 Apr 2022 15:21:41 +0000<![CDATA[Uncategorized]]><![CDATA[ADB]]><![CDATA[Java]]><![CDATA[MTLS]]>http://redstack.wordpress.com/?p=3388<![CDATA[In this post, I am going to explain how to configure a standalone Java (SE) application to connect to an Oracle Autonomous Database instance running in Oracle Cloud using Mutual TLS. The first thing you are going to need is … Continue reading →]]><![CDATA[
In this post, I am going to explain how to configure a standalone Java (SE) application to connect to an Oracle Autonomous Database instance running in Oracle Cloud using Mutual TLS.
The first thing you are going to need is an Oracle Autonomous Database instance. If you are reading this post, you probably already know how to get one. But just in case you don’t – here’s a good reference to get you started – and remember, this is available in the “always free” tier, so you can try this out for free!
When you look at your instance in the Oracle Cloud (OCI) console, you will see there is a button labelled DB Connection – go ahead and click on that:
Viewing the Autonomous Database instance in the Oracle Cloud Console.
In the slide out details page, there is a button labelled Download wallet – click on that and save the file somewhere convenient.
Downloading the wallet.
When you unzip the wallet file, you will see it contains a number of files, as shown below, including a tnsnames.ora and sqlnet.ora to tell your client how to access the database server, as well as some wallet files that contain certificates to authenticate to the database:
The first thing you need to do is edit the sqlnet.ora file and make sure the DIRECTORY entry matches the location where you unzipped the wallet, and then add the SSL_SERVER_DN_MATCH=yes option to the file. It should look something like this:
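# the DIRECTORY should point at wherever you unzipped the wallet
WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY="/home/mark/blog")))
SSL_SERVER_DN_MATCH=yes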
Before we set up Mutual TLS – let’s review how we can use this wallet as-is to connect to the database using a username and password. Let’s take a look at a simple Java application that we can use to validate connectivity – you can grab the source code from GitHub:
This repository contains a very simple, single class Java application that just connects to the database, checks that the connection was successful and then exits. It includes a Maven POM file to get the dependencies and to run the application.
Make sure you can compile the application successfully:
$ cd adb-mtls-sample
$ mvn clean compile
Before you run the sample, you will need to edit the Java class file to set the database JDBC URL and user to match your own environment. Notice these lines in the file src/main/java/com/github/markxnelson/SimpleJDBCTest.java:
// set the database JDBC URL - note that the alias ("myquickstart_high" in this example) and
// the location of the wallet must be changed to match your own environment
private static String url = "jdbc:oracle:thin:@myquickstart_high?TNS_ADMIN=/home/mark/blog";
// the username to connect to the database with
private static String username = "admin";
You need to update these with the correct alias name for your database (it is defined in the tnsnames.ora file in the wallet you downloaded) and the location of the wallet, i.e. the directory where you unzipped the wallet, the same directory where the tnsnames.ora is located.
You also need to set the correct username that the sample should use to connect to your database. Note that the user must exist and have at least the connect privilege in the database.
Once you have made these updates, you can compile and run the sample. Note that this code expects you to provide the password for that user in an environment variable called DB_PASSWORD:
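# a sketch - assuming the exec-maven-plugin is available; the main class matches the source file shown above
export DB_PASSWORD=<your database password>
mvn clean compile exec:java -Dexec.mainClass=com.github.markxnelson.SimpleJDBCTest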
Great! We can connect to the database normally, using a username and password. If you want to be sure, try commenting out the two lines that set the user and password on the data source and run this again – the connection will fail and you will get an error!
Now let’s configure it to use mutual TLS instead.
I included a script called setup_wallet.sh in the sample repository. If you prefer, you can just run that script and provide the username and passwords when asked. If you want to do it manually, then read on!
First, we need to configure the Java class path to include the Oracle Wallet JAR files. Maven will have downloaded these from Maven Central for you when you compiled the application above, so you can find them in your local Maven repository:
First, set the environment variable USER_DEFINED_WALLET to the directory where you unzipped the wallet, i.e. the directory where the tnsnames.ora is located.
export USER_DEFINED_WALLET=/home/mark/blog
You’ll also want to change the alias in this command to match your database alias. In the example above it is myquickstart_high. You get this value from your tnsnames.ora – it’s the same one you used in the Java code earlier.
Now we are ready to run the command. This will update the wallet to add your user’s credentials and associate them with that database alias.
Once we have done that, we can edit the Java source code to comment out (or remove) the two lines that set the user and password:
Now you can compile and run the program again, and this time it will get the credentials from the wallet and will use mutual TLS to connect to the database.
There you have it! We can now use this wallet to allow Java applications to connect to our database securely. This example we used was pretty simple, but you could imagine perhaps putting this wallet into a Kubernetes secret and mounting that secret as a volume for a pod running a Java microservice. This provides separation of the code from the credentials and certificates needed to connect to and validate the database, and helps us to build more secure microservices. Enjoy!
]]>https://redstack.dev/2022/04/11/configuring-a-java-application-to-connect-to-autonomous-database-using-mutual-tls/feed/1markxnelsonCan Java microservices be as fast as Go?
https://redstack.dev/2020/11/18/can-java-microservices-be-as-fast-as-go/
https://redstack.dev/2020/11/18/can-java-microservices-be-as-fast-as-go/#respond<![CDATA[Mark Nelson]]>Wed, 18 Nov 2020 15:30:14 +0000<![CDATA[Uncategorized]]>http://redstack.wordpress.com/?p=3378<![CDATA[I recently did a talk with Peter Nagy where we compared Java and Go microservices performance. We published a write up in the Helidon blog over at Medium.]]><![CDATA[
I recently did a talk with Peter Nagy where we compared Java and Go microservices performance. We published a write up in the Helidon blog over at Medium.
]]>https://redstack.dev/2020/11/18/can-java-microservices-be-as-fast-as-go/feed/0markxnelsonStoring ATP Wallets in a Kubernetes Secret
https://redstack.dev/2020/11/18/storing-atp-wallets-in-a-kubernetes-secret/
https://redstack.dev/2020/11/18/storing-atp-wallets-in-a-kubernetes-secret/#respond<![CDATA[Mark Nelson]]>Wed, 18 Nov 2020 14:35:58 +0000<![CDATA[Uncategorized]]>http://redstack.wordpress.com/?p=3366<![CDATA[In this previous post, we talked about how to create a WebLogic datasource for an ATP database. In that example we put the ATP wallet into the domain directly, which is fine if your domain is on a secure environment, but … Continue reading →]]><![CDATA[
In this previous post, we talked about how to create a WebLogic datasource for an ATP database. In that example we put the ATP wallet into the domain directly, which is fine if your domain is on a secure environment, but if we want to use ATP from a WebLogic domain running in Kubernetes, you might not want to burn the wallet into the Docker image. Doing so would enable anyone with access to the Docker image to retrieve the wallet.
A more reasonable thing to do in the Kubernetes environment would be to put the ATP wallet into a Kubernetes secret and mount that secret into the container.
You will, of course need to decide where you are going to mount it and update the sqlnet.ora with the right path, like we did in the previous post. Once that is taken care of, you can create the secret from the wallet using a small script like this:
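#!/bin/bash
# a sketch - build a Kubernetes secret manifest containing every file in the unzipped wallet
# (the wallet directory and secret name here are illustrative)
WALLET_DIR=/shared/atp
cat > atp-wallet-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: atp-wallet
type: Opaque
data:
EOF
for f in $WALLET_DIR/*; do
  echo "  $(basename $f): $(base64 -w0 $f)" >> atp-wallet-secret.yaml
done
kubectl apply -f atp-wallet-secret.yaml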
We need to base64 encode the data that we put into the secret. When you mount the secret on a container (in a pod), Kubernetes will decode it, so it appears to the container in its original form.
Here is an example of how to mount the secret in a container:
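# a sketch of the relevant parts of a pod spec (the image name is illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: weblogic-server
spec:
  containers:
  - name: weblogic-server
    image: my-weblogic-image:latest
    volumeMounts:
    - name: atp-wallet
      mountPath: /shared/atp
      readOnly: true
  volumes:
  - name: atp-wallet
    secret:
      secretName: atp-wallet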
You will obviously still need to control access to the secret and the running containers, but overall this approach does help to provide a better security stance.
]]>https://redstack.dev/2020/11/18/storing-atp-wallets-in-a-kubernetes-secret/feed/0markxnelsonConfiguring a WebLogic Data Source to use ATP
https://redstack.dev/2020/11/18/configuring-a-weblogic-data-source-to-use-atp/
https://redstack.dev/2020/11/18/configuring-a-weblogic-data-source-to-use-atp/#comments<![CDATA[Mark Nelson]]>Wed, 18 Nov 2020 14:34:46 +0000<![CDATA[Uncategorized]]>http://redstack.wordpress.com/?p=3364<![CDATA[In this post I am going to share details about how to configure a WebLogic data source to use ATP. If you are not familiar with ATP, it is the new Autonomous Transaction Processing service on Oracle Cloud. It provides a fully … Continue reading →]]><![CDATA[
In this post I am going to share details about how to configure a WebLogic data source to use ATP.
If you are not familiar with ATP, it is the new Autonomous Transaction Processing service on Oracle Cloud. It provides a fully managed autonomous database. You can create a new database in the OCI console in the Database menu under “Autonomous Transaction Processing” by clicking on that big blue button:
You need to give it a name, choose the number of cores and set an admin password:
It will take a few minutes to provision the database. Once it is ready, click on the database to view details.
Then click on the “DB Connection” button to download the wallet that we will need to connect to the database.
You need to provide a password for the wallet, and then you can download it:
Copy the wallet to your WebLogic server and unzip it. You will see the following files:
[oracle@domain1-admin-server atp]$ ls -l
total 40
-rw-rw-r--. 1 oracle oracle 6661 Feb 4 17:40 cwallet.sso
-rw-rw-r--. 1 oracle oracle 6616 Feb 4 17:40 ewallet.p12
-rw-rw-r--. 1 oracle oracle 3241 Feb 4 17:40 keystore.jks
-rw-rw-r--. 1 oracle oracle 87 Feb 4 17:40 ojdbc.properties
-rw-rw-r--. 1 oracle oracle 114 Feb 4 17:40 sqlnet.ora
-rw-rw-r--. 1 oracle oracle 6409 Feb 4 17:40 tnsnames.ora
-rw-rw-r--. 1 oracle oracle 3336 Feb 4 17:40 truststore.jks
I put these in a directory called /shared/atp. You need to update the sqlnet.ora to have the correct location as shown below:
You can now log in to the WebLogic console and create a data source, give it a name on the first page:
You can take the defaults on the second page:
And the third:
On the next page, you need to set the database name, hostname and port to the values from the tnsnames.ora:
On the next page you can provide the username and password. In this example I am just using the admin user. In a real life scenario you would probably go and create a “normal” user and use that. You can find details about how to set up SQLPLUS here.
You also need to set up a set of connection properties that are required for ATP, as shown below; you can find more details in the ATP documentation:
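They look something like this (the keystore and truststore passwords are the wallet password you chose when you downloaded the wallet):
javax.net.ssl.keyStore=/shared/atp/keystore.jks
javax.net.ssl.keyStorePassword=<wallet password>
javax.net.ssl.trustStore=/shared/atp/truststore.jks
javax.net.ssl.trustStorePassword=<wallet password>
oracle.net.ssl_server_dn_match=true
oracle.net.ssl_version=1.2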
Also notice that the URL format is jdbc:oracle:thin:@cafedatabase_high – you just need to put the name in there from the tnsnames.ora file:
On the next page you can target the data source to the appropriate servers, and we are done! Click on the “Finish” button and then you can activate changes if you are in production mode.
You can now go and test the data source: in the “Monitoring” tab, go to “Testing”, select the data source and click on the “Test Data Source” button.
You will see the success message:
Enjoy!
]]>https://redstack.dev/2020/11/18/configuring-a-weblogic-data-source-to-use-atp/feed/1markxnelsonNew Steps Store launched in Wercker!
https://redstack.dev/2018/04/05/new-steps-store-launched-in-wercker/
https://redstack.dev/2018/04/05/new-steps-store-launched-in-wercker/#respond<![CDATA[Mark Nelson]]>Thu, 05 Apr 2018 12:16:53 +0000<![CDATA[Uncategorized]]><![CDATA[CI/CD]]><![CDATA[steps]]><![CDATA[wercker]]>http://redstack.wordpress.com/?p=3358<![CDATA[Wercker’s new Steps Store just went live and you can read all about it here: http://blog.wercker.com/steps-launch-of-new-steps-store In case you don’t know – Wercker is Oracle’s cloud-based (SaaS) CI/CD platform, which you can use for free at http://www.wercker.com. Steps are reusable … Continue reading →]]><![CDATA[
Wercker’s new Steps Store just went live and you can read all about it here:
In case you don’t know – Wercker is Oracle’s cloud-based (SaaS) CI/CD platform, which you can use for free at http://www.wercker.com. Steps are reusable parts that can be used in continuous delivery pipelines. They are almost all open source and free to use too. We also have a non-free tier which we call “Oracle Container Pipelines” which gives you dedicated resources to run your pipelines.
]]>https://redstack.dev/2018/04/05/new-steps-store-launched-in-wercker/feed/0markxnelsonOracle releases the open source Oracle WebLogic Server Kubernetes Operator
https://redstack.dev/2018/02/06/oracle-releases-the-open-source-oracle-weblogic-server-kubernetes-operator/
https://redstack.dev/2018/02/06/oracle-releases-the-open-source-oracle-weblogic-server-kubernetes-operator/#respond<![CDATA[Mark Nelson]]>Tue, 06 Feb 2018 21:57:54 +0000<![CDATA[Uncategorized]]>http://redstack.wordpress.com/?p=3356<![CDATA[I am very happy to be able to announce that we have just released and open sourced the Oracle WebLogic Server Kubernetes Operator, which I have been working on with a great team of people for the last few months! … Continue reading →]]><![CDATA[
I am very happy to be able to announce that we have just released and open sourced the Oracle WebLogic Server Kubernetes Operator, which I have been working on with a great team of people for the last few months!
You can find the official announcement on the WebLogic Server blog, and the code is on GitHub at https://github.com/oracle/weblogic-kubernetes-operator. This initial release is a “Technology Preview” which we really hope people will be interested in playing with and giving feedback on. We have already had some great feedback from our small group of testers, who have been playing with it for the last couple of weeks, and we are very, very appreciative of their input. We have some great plans for the operator going forward.
]]>https://redstack.dev/2018/02/06/oracle-releases-the-open-source-oracle-weblogic-server-kubernetes-operator/feed/0markxnelsonOracle releases certification for WebLogic Server on Kubernetes
https://redstack.dev/2018/01/16/oracle-releases-certification-for-weblogic-server-on-kubernetes/
https://redstack.dev/2018/01/16/oracle-releases-certification-for-weblogic-server-on-kubernetes/#respond<![CDATA[Mark Nelson]]>Wed, 17 Jan 2018 00:08:06 +0000<![CDATA[Uncategorized]]>http://redstack.wordpress.com/?p=3354<![CDATA[In case you missed it, Oracle has certified WebLogic Server on Kubernetes. You can read all the details here: https://blogs.oracle.com/weblogicserver/weblogic-server-certification-on-kubernetes]]><![CDATA[
In case you missed it, Oracle has certified WebLogic Server on Kubernetes. You can read all the details here: https://blogs.oracle.com/weblogicserver/weblogic-server-certification-on-kubernetes
]]>https://redstack.dev/2018/01/16/oracle-releases-certification-for-weblogic-server-on-kubernetes/feed/0markxnelsonJava EE is moving to the Eclipse Foundation
https://redstack.dev/2017/09/25/java-ee-is-moving-to-the-eclipse-foundation/
https://redstack.dev/2017/09/25/java-ee-is-moving-to-the-eclipse-foundation/#respond<![CDATA[Mark Nelson]]>Tue, 26 Sep 2017 01:01:13 +0000<![CDATA[Uncategorized]]>http://redstack.wordpress.com/?p=3350<![CDATA[I’m sure many of you have already heard the news, but in case you missed it, you might want to read all about it here!]]><![CDATA[
I’m sure many of you have already heard the news, but in case you missed it, you might want to read all about it here!
]]>https://redstack.dev/2017/09/25/java-ee-is-moving-to-the-eclipse-foundation/feed/0markxnelsonJava SE 9 and Java EE 8 released
https://redstack.dev/2017/09/21/java-se-9-and-java-ee-8-released/
https://redstack.dev/2017/09/21/java-se-9-and-java-ee-8-released/#respond<![CDATA[Mark Nelson]]>Thu, 21 Sep 2017 22:33:25 +0000<![CDATA[Uncategorized]]>http://redstack.wordpress.com/2017/09/21/java-se-9-and-java-ee-8-released/<![CDATA[“Oracle today announced the general availability of Java SE 9 (JDK 9), Java Platform Enterprise Edition 8 (Java EE 8) and the Java EE 8 Software Development Kit (SDK). “ You can read the Oracle Press Release here: “Oracle Announces … Continue reading →]]><![CDATA[
“Oracle today announced the general availability of Java SE 9 (JDK 9), Java Platform Enterprise Edition 8 (Java EE 8) and the Java EE 8 Software Development Kit (SDK).”
]]>https://redstack.dev/2017/09/21/java-se-9-and-java-ee-8-released/feed/0markxnelsonOracle joins Cloud Native Computing Foundation
https://redstack.dev/2017/09/19/oracle-joins-cloud-native-computing-foundation/
https://redstack.dev/2017/09/19/oracle-joins-cloud-native-computing-foundation/#respond<![CDATA[Mark Nelson]]>Tue, 19 Sep 2017 11:48:13 +0000<![CDATA[Uncategorized]]>http://redstack.wordpress.com/2017/09/19/oracle-joins-cloud-native-computing-foundation/<![CDATA[Read about it over here: https://blogs.oracle.com/developers/oracle-joins-cncf-doubles-down-further-on-kubernetes]]><![CDATA[
]]>https://redstack.dev/2017/09/19/oracle-joins-cloud-native-computing-foundation/feed/0markxnelsonHello Again
https://redstack.dev/2017/09/07/hello-again/
https://redstack.dev/2017/09/07/hello-again/#respond<![CDATA[Mark Nelson]]>Thu, 07 Sep 2017 23:41:46 +0000<![CDATA[Uncategorized]]>http://redstack.wordpress.com/?p=3338<![CDATA[Hello Readers, it has been a long time between drinks, and I apologize for not posting for a while. I have certainly missed you all. I have been working on some internal projects at Oracle for the last couple of … Continue reading →]]><![CDATA[
Hello Readers, it has been a long time between drinks, and I apologize for not posting for a while. I have certainly missed you all. I have been working on some internal projects at Oracle for the last couple of years, and I have not really had much to share. But now I am doing a lot more work that is shareable, and hopefully interesting too, so I am going to start posting again!
I will leave all of the old posts there for posterity, but the new posts that I am planning will be all about cloudy topics (isn’t everything these days?). I plan to talk about things that I am working with like Docker, Kubernetes, Go, Java 9, Prometheus, ELK, Grafana, Swarm, Compose, Minikube, functions, registries, SRE… that kind of thing!
I look forward to talking to you all again real soon.
]]>https://redstack.dev/2017/09/07/hello-again/feed/0markxnelsonLearn more about the Chef and Puppet sample code
https://redstack.dev/2016/04/01/learn-more-about-the-chef-and-puppet-sample-code/
https://redstack.dev/2016/04/01/learn-more-about-the-chef-and-puppet-sample-code/#respond<![CDATA[Mark Nelson]]>Fri, 01 Apr 2016 15:38:34 +0000<![CDATA[Uncategorized]]><![CDATA[Chef]]><![CDATA[Puppet]]><![CDATA[WebLogic]]>http://redstack.wordpress.com/?p=3334<![CDATA[For those who are interested in the Chef and Puppet sample code we published on github, please enjoy this video:]]><![CDATA[
For those who are interested in the Chef and Puppet sample code we published on github, please enjoy this video:
]]>https://redstack.dev/2016/04/01/learn-more-about-the-chef-and-puppet-sample-code/feed/0markxnelsonJDBC Drivers in Oracle Maven Repository
https://redstack.dev/2016/03/28/jdbc-drives-in-oracle-maven-repository/
https://redstack.dev/2016/03/28/jdbc-drives-in-oracle-maven-repository/#respond<![CDATA[Mark Nelson]]>Mon, 28 Mar 2016 18:11:15 +0000<![CDATA[Uncategorized]]><![CDATA[Maven]]><![CDATA[Oracle Maven Repository]]>http://redstack.wordpress.com/?p=3327<![CDATA[In case you missed the public announcement made on Stackoverflow, OTN and on JDBC Forum – Oracle is now making JDBC and UCP drivers available in our Maven repository (https://maven.oracle.com). You can learn more in this blog.]]><![CDATA[
]]>https://redstack.dev/2016/03/28/jdbc-drives-in-oracle-maven-repository/feed/0markxnelsonOracle publishes official Chef and Puppet samples on github
https://redstack.dev/2016/03/21/oracle-publishes-official-chef-and-puppet-samples-on-github/
https://redstack.dev/2016/03/21/oracle-publishes-official-chef-and-puppet-samples-on-github/#comments<![CDATA[Mark Nelson]]>Mon, 21 Mar 2016 16:59:42 +0000<![CDATA[Uncategorized]]><![CDATA[11g]]><![CDATA[12c]]><![CDATA[Chef]]><![CDATA[Puppet]]>http://redstack.wordpress.com/?p=3190<![CDATA[With Edwin Biemond Oracle official sample Chef cookbooks and Puppet modules are now available on Oracle’s official GitHub page at https://github.com/oracle/chef-samples and https://github.com/oracle/puppet-samples for Java, WebLogic and Fusion Middleware. Chef and Puppet are arguably the most popular provisioning frameworks and … Continue reading →]]><![CDATA[
Chef and Puppet are arguably the most popular provisioning frameworks, and we would like to help you with your WebLogic and Fusion Middleware provisioning by publishing samples for both. The Chef and Puppet sample modules and cookbooks are almost identical and use the same development frameworks.
We start this series of posts with an introduction to what these cookbooks and modules can do, and a simple quickstart showing how to create a WebLogic domain in Chef or Puppet on a Windows, Linux or Solaris host. In the following posts we will install FMW software, apply patches, and extend a domain with FMW software.
These Chef cookbooks and Puppet modules allow you to install WebLogic together with Fusion Middleware on any Linux, Solaris or Windows host. The cookbooks should work with Chef 11 & 12 and can be used with Chef Server or Chef Solo. The modules should work on Puppet 3 & 4, and it should not matter whether you use Puppet Enterprise, an agent, or just puppet apply.
The first release on github will contain the following cookbooks and modules:
fmw_jdk, installs JDK 7 or 8 and optionally configures the urandom random number generator service for Linux distributions.
fmw_wls, installs WebLogic 11g or 12c and optionally creates the Oracle user and group on Solaris and Linux.
fmw_bsu, patches a WebLogic 11g middleware home.
fmw_opatch, patches WebLogic 12c or any Fusion Middleware 11g, 12c product.
fmw_inst, installs FMW 11g, 12c product including Oracle SOA Suite, Service Bus, MFT, OIM, ADF Runtime, and WebCenter.
fmw_rcu, creates an 11g or 12c Common or SOA Suite FMW repository on an Oracle Database.
fmw_domain, creates a domain in development mode, configures the node manager, starts the admin server and extends the domain with ADF/JRF, Service Bus, SOA Suite and optionally with BAM and Enterprise Scheduler.
A couple of important notes about these cookbooks and modules:
They don’t download any (free or licensed) software from OTN, Oracle Support or eDelivery; obtaining the binaries is the responsibility of the user/customer.
Oracle Support won’t support these cookbooks or modules, but you can raise issues or send us pull requests on GitHub.
Binaries should be locally accessible in a directory or a network share.
They will create a domain only in development mode.
They won’t do multi-node provisioning like pack/unpack of a domain on cluster nodes.
Passwords in Chef can be defined in data bags; in Puppet you need to do it yourself in Hiera.
They have dependencies on each other and will automatically include or require the manifests or recipes they depend on.
The default recipe/manifest does not do anything; you have to use the specific recipes or manifests. In Puppet, the default manifest is used to set some module defaults.
Detailed overview
These cookbooks and modules contain recipes and manifests backed by resource/providers in Chef and type/providers in Puppet. The recipes and manifests are a quick way to get you started; these minimal recipes just call the matching resource/type providers, which you can also call yourself from one of your own cookbooks.
fmw_jdk
install, installs JDK 7 or 8 and supports the following distribution formats: RPM on RedHat-family distributions, tar.gz for Linux and Solaris, tar.Z for Solaris and exe for Windows
rng_service, configures the urandom rng service on RedHat or Debian Linux distributions.
fmw_wls
setup, an optional step that creates a Linux or Solaris operating system user and group to be used by the install recipe/manifest.
install, installs WebLogic 10.3.6 (11g), 12.1.1, 12.1.2, 12.1.3 or 12.2.1 (12c). This can also install the WebLogic Infrastructure edition when the version is above 12.1.2.
fmw_bsu
weblogic, patches a WebLogic 10.3.6 or 12.1.1 middleware home; this will not work on 12.1.2 or higher (use fmw_opatch instead)
fmw_opatch
weblogic, patches WebLogic 12.1.2 or higher
service_bus, patches Service Bus 11g or 12c
soa_suite, patches SOA Suite 11g or 12c
fmw_inst
jrf, installs the JRF/ADF runtime 11g, 12c
service_bus, installs Service Bus 11g, 12c
soa_suite, installs SOA Suite 11g, 12c
mft, installs MFT 12c
oim, installs OIM 11.1.2.3
webcenter, installs WebCenter 11g
fmw_rcu
common, creates a common 11g/12c repository for ADF/JRF, EM, OPSS, etc.
soa_suite, creates a SOA Suite 11g/12c repository which can be used for SOA Suite with BAM, ESS or Oracle Service Bus
fmw_domain
domain, creates an 11g or 12c WebLogic domain in development mode
nodemanager, configures and starts the node manager service
adminserver, starts the admin server by connecting to the node manager
extension_jrf, extends the domain with JRF/ADF together with EM (the Enterprise Manager webapp); on 12c this requires a common or soa_suite rcu repository
extension_service_bus, extends the domain with Oracle Service Bus, requires a soa_suite rcu repository
extension_soa_suite, extends the domain with Oracle SOA Suite, requires a soa_suite rcu repository
extension_bam, extends the domain with the BAM component of Oracle SOA Suite, requires a soa_suite rcu repository
extension_enterprise_scheduler, extends the 12c domain with the Enterprise Scheduler component of Oracle SOA Suite, requires a soa_suite rcu repository
extension_webtier, extends the 12c domain for collocated OHS
Chef quickstart
In these examples I added all the recipes to the run list, even though the required recipes are automatically included, because this makes the examples easier to explain.
fmw is the namespace for all the FMW global attributes, like java_home_dir or middleware_home_dir.
We start by installing JDK 8 with WebLogic 12.1.3 on a Windows host. To do so we need to add the install recipes of the fmw_jdk and fmw_wls cookbooks to the run list, set some global fmw parameters like java_home_dir and weblogic_home_dir, and then define the source attributes of the fmw_jdk and fmw_wls cookbooks.
We can do the same on a Linux host, but in this case I want to make sure that the jdk cookbook also configures the rngd service so I don’t run out of entropy, and that the wls cookbook creates the Linux oracle user and its group. In this example I will use JDK 7 (on RedHat systems like OEL and CentOS you can also use the RPM as the source file) and install the WebLogic 12.1.3 Infrastructure edition.
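To give you an idea, a chef-solo node definition for this Linux example could look roughly like the following; the run list matches the recipes described above, but the exact source attribute names (for example ['fmw_jdk']['source_file'] and ['fmw_wls']['source_file']) are my assumptions based on the matching Puppet parameters further down, so check the cookbooks on GitHub for the real names:

{
  "run_list": [
    "recipe[fmw_jdk::rng_service]",
    "recipe[fmw_jdk::install]",
    "recipe[fmw_wls::setup]",
    "recipe[fmw_wls::install]"
  ],
  "fmw": {
    "version": "12.1.3",
    "java_home_dir": "/usr/java/jdk1.7.0_79",
    "middleware_home_dir": "/opt/oracle/middleware_1213"
  },
  "fmw_jdk": {
    "source_file": "/software/jdk-7u79-linux-x64.tar.gz"
  },
  "fmw_wls": {
    "install_type": "infra",
    "source_file": "/software/fmw_12.1.3.0.0_infrastructure.jar"
  }
}

With chef-solo you would apply this with something like chef-solo -c solo.rb -j node.json once the cookbooks are available on the node.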
In the cookbook we defined the following attributes with their defaults. You can always override these attributes, or remove them from your attributes file when you are happy with the defaults.
default['fmw']['version'] = '12.1.3' # 10.3.6|12.1.1|12.1.2|12.1.3|12.2.1
default['fmw_wls']['install_type'] = 'wls' # infra or wls

if platform_family?('windows')
  default['fmw']['middleware_home_dir'] = 'C:/oracle/middleware'
  default['fmw']['ora_inventory_dir'] = 'C:\\Program Files\\Oracle\\Inventory'
  default['fmw']['tmp_dir'] = 'C:/temp'
else
  default['fmw']['middleware_home_dir'] = '/opt/oracle/middleware'
  default['fmw']['os_user'] = 'oracle'
  default['fmw']['os_group'] = 'oinstall'
  default['fmw']['os_shell'] = '/bin/bash'
end

case platform_family
when 'debian', 'rhel'
  default['fmw']['orainst_dir'] = '/etc'
  default['fmw']['user_home_dir'] = '/home'
  default['fmw']['ora_inventory_dir'] = '/home/oracle/oraInventory'
  default['fmw']['tmp_dir'] = '/tmp'
when 'solaris2'
  default['fmw']['orainst_dir'] = '/var/opt/oracle'
  default['fmw']['user_home_dir'] = '/export/home'
  default['fmw']['ora_inventory_dir'] = '/export/home/oracle/oraInventory'
  default['fmw']['tmp_dir'] = '/var/tmp'
end
The next example creates a simple WebLogic domain. In this case we need to add the domain, nodemanager and adminserver recipes to the run list. This will create the WebLogic domain, after which we can configure and start the node manager service and start up the WebLogic admin server. A sketch of the node attributes is shown below.
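As a rough sketch only (the fmw_domain attribute names are again my assumptions, mirrored from the Puppet parameters shown later in this post, and the fmw_jdk/fmw_wls source attributes from the previous example are omitted for brevity), the node definition could look something like this:

{
  "run_list": [
    "recipe[fmw_jdk::install]",
    "recipe[fmw_wls::setup]",
    "recipe[fmw_wls::install]",
    "recipe[fmw_domain::domain]",
    "recipe[fmw_domain::nodemanager]",
    "recipe[fmw_domain::adminserver]"
  ],
  "fmw": {
    "version": "12.1.3",
    "java_home_dir": "/usr/java/jdk1.8.0_74",
    "middleware_home_dir": "/opt/oracle/middleware_1213",
    "weblogic_home_dir": "/opt/oracle/middleware_1213/wlserver"
  },
  "fmw_domain": {
    "domains_dir": "/opt/oracle/middleware_1213/user_projects/domains",
    "apps_dir": "/opt/oracle/middleware_1213/user_projects/applications",
    "domain_name": "base",
    "weblogic_password": "welcome1",
    "adminserver_listen_address": "10.10.10.81",
    "adminserver_listen_port": 7001,
    "nodemanager_listen_address": "10.10.10.81",
    "nodemanager_port": 5556
  }
}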
In Chef, a domain data bag entry can be used for multi-node provisioning or to encrypt your passwords.
In the last part we can do the same for Solaris but this time we will use WebLogic 10.3.6/11g and extend the domain with some machines, servers and clusters.
For Solaris the JDK 7 installation is a bit different; in this case we need to provide two JDK source files.
Puppet quickstart
We start by installing JDK 8 with WebLogic 12.1.3 on a Windows host. To do so we need to execute the install manifests of the fmw_jdk and fmw_wls modules. We also override the middleware_home_dir attribute of the fmw_wls module.
We can do the same on a Linux host, but in this case I want to make sure that the jdk module also configures the rngd service so I don’t run out of entropy, and that the wls module creates the Linux oracle user and its group. In this example I will use JDK 7 (on RedHat systems like OEL and CentOS you can also use the RPM as the source_file) and install the WebLogic 12.1.3 Infrastructure edition.
On class { 'fmw_wls' } I can set the wls module defaults, which will be used by the fmw_wls::install manifest. And with Class['fmw_wls::setup'] -> Class['fmw_wls::install'] I make sure that the setup manifest is executed before the wls install manifest.
node default {
  include fmw_jdk::rng_service

  $java_home_dir       = '/usr/java/jdk1.7.0_79'
  $version             = '12.1.3'
  $middleware_home_dir = '/opt/oracle/middleware_1213'

  Class['fmw_wls::setup'] ->
  Class['fmw_wls::install']

  class { 'fmw_jdk::install':
    java_home_dir => $java_home_dir,
    source_file   => '/software/jdk-7u79-linux-x64.tar.gz'
  }

  class { 'fmw_wls':
    version             => $version,
    middleware_home_dir => $middleware_home_dir
  }

  include fmw_wls::setup

  class { 'fmw_wls::install':
    java_home_dir => $java_home_dir,
    source_file   => '/software/fmw_12.1.3.0.0_infrastructure.jar',
    install_type  => 'infra'
  }
}
In the wls module we have the following params manifest, which contains the default values. You can always override these attributes on the fmw_wls class, or leave them out of the class declaration when you are happy with the defaults.
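As a rough sketch, assuming the params manifest simply mirrors the Chef attribute defaults shown earlier (so treat the exact variable names as assumptions and check the module on GitHub for the real ones), it would look something like this:

class fmw_wls::params {
  # module defaults, overridable on the fmw_wls class
  $version      = '12.1.3' # 10.3.6|12.1.1|12.1.2|12.1.3|12.2.1
  $install_type = 'wls'    # wls or infra

  case $::kernel {
    'windows': {
      $middleware_home_dir = 'C:/oracle/middleware'
      $tmp_dir             = 'C:/temp'
    }
    default: {
      $middleware_home_dir = '/opt/oracle/middleware'
      $os_user             = 'oracle'
      $os_group            = 'oinstall'
      $os_shell            = '/bin/bash'
      $tmp_dir             = '/tmp'
    }
  }
}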
The next example creates a simple WebLogic domain. In this case we need to invoke the domain, nodemanager and adminserver manifests. This will create the WebLogic domain, after which we can configure and start the node manager service and start up the WebLogic admin server.
node default {
  include fmw_jdk::rng_service

  $java_home_dir       = '/usr/java/jdk1.8.0_74'
  $version             = '12.1.3'
  $middleware_home_dir = '/opt/oracle/middleware_1213'
  $weblogic_home_dir   = '/opt/oracle/middleware_1213/wlserver'

  Class['fmw_wls::setup'] ->
  Class['fmw_wls::install']

  class { 'fmw_jdk::install':
    java_home_dir => $java_home_dir,
    source_file   => '/software/jdk-8u74-linux-x64.tar.gz'
  }

  class { 'fmw_wls':
    version             => $version,
    middleware_home_dir => $middleware_home_dir,
  }

  include fmw_wls::setup

  class { 'fmw_wls::install':
    java_home_dir => $java_home_dir,
    source_file   => '/software/fmw_12.1.3.0.0_wls.jar'
  }

  class { 'fmw_domain':
    version                    => $version,
    java_home_dir              => $java_home_dir,
    middleware_home_dir        => $middleware_home_dir,
    weblogic_home_dir          => $weblogic_home_dir,
    domains_dir                => '/opt/oracle/middleware_1213/user_projects/domains',
    apps_dir                   => '/opt/oracle/middleware_1213/user_projects/applications',
    domain_name                => 'base',
    weblogic_password          => 'welcome1',
    adminserver_listen_address => '10.10.10.81',
    adminserver_listen_port    => 7001,
    nodemanager_listen_address => '10.10.10.81',
    nodemanager_port           => 5556
  }

  include fmw_domain::domain
  include fmw_domain::nodemanager
  include fmw_domain::adminserver
}
In the last part we can do the same for Solaris but this time we will use WebLogic 10.3.6/11g and extend the domain with some machines, servers and clusters.
For Solaris the JDK 7 installation is a bit different; in this case we need to provide two JDK source files.
node default {
  $java_home_dir       = '/usr/jdk/instances/jdk1.7.0_79'
  $version             = '10.3.6'
  $middleware_home_dir = '/opt/oracle/middleware_11g'
  $weblogic_home_dir   = '/opt/oracle/middleware_11g/wlserver_10.3'

  Class['fmw_wls::setup'] ->
  Class['fmw_wls::install']

  class { 'fmw_jdk::install':
    java_home_dir   => $java_home_dir,
    source_file     => '/software/jdk-7u79-solaris-i586.tar.gz',
    source_x64_file => '/software/jdk-7u79-solaris-x64.tar.gz'
  }

  class { 'fmw_wls':
    version             => $version,
    middleware_home_dir => $middleware_home_dir
  }

  include fmw_wls::setup

  class { 'fmw_wls::install':
    java_home_dir => $java_home_dir,
    source_file   => '/software/wls1036_generic.jar'
  }

  class { 'fmw_domain':
    version                    => $version,
    java_home_dir              => $java_home_dir,
    middleware_home_dir        => $middleware_home_dir,
    weblogic_home_dir          => $weblogic_home_dir,
    domains_dir                => '/opt/oracle/middleware_11g/user_projects/domains',
    apps_dir                   => '/opt/oracle/middleware_11g/user_projects/applications',
    domain_name                => 'base',
    weblogic_password          => 'welcome1',
    adminserver_listen_address => '10.10.10.81',
    adminserver_listen_port    => 7001,
    nodemanager_listen_address => '10.10.10.81',
    nodemanager_port           => 5556
  }

  class { 'fmw_domain::domain':
    nodemanagers => [ { "id"             => "node1",
                        "listen_address" => "10.10.10.81"
                      },
                      { "id"             => "node2",
                        "listen_address" => "10.10.10.81"
                      }],
    servers      => [ { "id"             => "server1",
                        "nodemanager"    => "node1",
                        "listen_address" => "10.10.10.81",
                        "listen_port"    => 8001,
                        "arguments"      => "-XX:PermSize=256m -XX:MaxPermSize=512m -Xms1024m -Xmx1024m"
                      },
                      { "id"             => "server2",
                        "nodemanager"    => "node2",
                        "listen_address" => "10.10.10.81",
                        "listen_port"    => 8002,
                        "arguments"      => "-XX:PermSize=256m -XX:MaxPermSize=512m -Xms1024m -Xmx1024m"
                      },
                      { "id"             => "server3",
                        "nodemanager"    => "node1",
                        "listen_address" => "10.10.10.81",
                        "listen_port"    => 9001,
                        "arguments"      => "-XX:PermSize=256m -XX:MaxPermSize=512m -Xms1024m -Xmx1024m"
                      }],
    clusters     => [ { "id"      => "cluster1",
                        "members" => ["server1","server2"]
                      },
                      { "id"      => "cluster2",
                        "members" => ["server3"]
                      }]
  }

  include fmw_domain::nodemanager
  include fmw_domain::adminserver
}
]]>https://redstack.dev/2016/03/21/oracle-publishes-official-chef-and-puppet-samples-on-github/feed/4markxnelsonedwinUsing Coherence on Oracle Java Cloud Service
https://redstack.dev/2015/06/15/using-coherence-on-oracle-java-cloud-service/
https://redstack.dev/2015/06/15/using-coherence-on-oracle-java-cloud-service/#respond<![CDATA[Mark Nelson]]>Mon, 15 Jun 2015 01:24:59 +0000<![CDATA[Uncategorized]]>http://redstack.wordpress.com/2015/06/15/using-coherence-on-oracle-java-cloud-service/<![CDATA[Originally posted on Coherence Down Under: There are a number of Tutorials on using Oracle’s Java Cloud Service (JCS). One in particular shows how to configure a Coherence instance in JCS. See here for more details. For the full list…]]><![CDATA[
There are a number of tutorials on using Oracle’s Java Cloud Service (JCS). One in particular shows how to configure a Coherence instance in JCS; see here for more details.
]]>https://redstack.dev/2015/06/15/using-coherence-on-oracle-java-cloud-service/feed/0markxnelsonDocker panel discussion on OTN ArchBeat
https://redstack.dev/2015/04/03/docker-panel-discussion-on-otn-archbeat/
https://redstack.dev/2015/04/03/docker-panel-discussion-on-otn-archbeat/#respond<![CDATA[Mark Nelson]]>Thu, 02 Apr 2015 23:16:33 +0000<![CDATA[Uncategorized]]><![CDATA[Docker]]><![CDATA[OTN]]>http://redstack.wordpress.com/?p=3178<![CDATA[Recently, I participated in an OTN ArchBeat panel discussion on Docker with Edwin Biemond, Lucas Jellema, Jim Clark and Matt Wright. We chatted about topics like Docker’s readiness for production use, how Docker and containerization in general impact our views … Continue reading →]]><![CDATA[
Recently, I participated in an OTN ArchBeat panel discussion on Docker with Edwin Biemond, Lucas Jellema, Jim Clark and Matt Wright. We chatted about topics like Docker’s readiness for production use, how Docker and containerization in general impact our views on provisioning tools like Chef and Puppet, how Docker fits in with Continuous Delivery, and the importance of the ecosystem that is needed around Docker to make the whole thing work.
]]>https://redstack.dev/2015/04/03/docker-panel-discussion-on-otn-archbeat/feed/0markxnelson“Getting Started with Oracle Maven Repository” from the OTN Virtual Technology Summit
https://redstack.dev/2015/03/04/getting-started-with-oracle-maven-repository-from-the-otn-virtual-technology-summit/
https://redstack.dev/2015/03/04/getting-started-with-oracle-maven-repository-from-the-otn-virtual-technology-summit/#respond<![CDATA[Mark Nelson]]>Wed, 04 Mar 2015 07:49:08 +0000<![CDATA[Uncategorized]]><![CDATA[Maven]]><![CDATA[Oracle Maven Repository]]>http://redstack.wordpress.com/?p=3173<![CDATA[Whew! The third and final broadcast of the Oracle Technology Network Virtual Technology Summit just finished with the APAC broadcast a few minutes ago. If you missed it – I had a session called “Getting Started with the Oracle Maven … Continue reading →]]><![CDATA[
Whew! The third and final broadcast of the Oracle Technology Network Virtual Technology Summit just finished with the APAC broadcast a few minutes ago. If you missed it – I had a session called “Getting Started with the Oracle Maven Repository” which shows you how to use our new repository. The video is available for replay on OTN – just go here.
Don’t forget that Nexus and Artifactory also both have support for proxying the Oracle Maven Repository now!
]]>https://redstack.dev/2015/03/04/getting-started-with-oracle-maven-repository-from-the-otn-virtual-technology-summit/feed/0markxnelsonNexus Support for Oracle Maven Repository
https://redstack.dev/2015/02/20/nexus-support-for-oracle-maven-repository/
https://redstack.dev/2015/02/20/nexus-support-for-oracle-maven-repository/#respond<![CDATA[Mark Nelson]]>Fri, 20 Feb 2015 01:50:16 +0000<![CDATA[Uncategorized]]><![CDATA[Maven]]><![CDATA[Nexus]]><![CDATA[Oracle Maven Repository]]>http://redstack.wordpress.com/?p=3171<![CDATA[More good news, Nexus 2.11.2 is now available and supports proxying the Oracle Maven Repository, in both the OSS and Pro versions. You can find all the details here including a video demonstration. The Sonatype guys, especially Manfred Moser (@simpligility) … Continue reading →]]><![CDATA[
More good news: Nexus 2.11.2 is now available and supports proxying the Oracle Maven Repository, in both the OSS and Pro versions. You can find all the details here, including a video demonstration.
The Sonatype guys, especially Manfred Moser (@simpligility), have been great to work with, and if you are not sure which repository manager to use, I would certainly encourage you to look at Nexus. They are easy to work with, they listen to their users, and they are willing to collaborate with and support the community.