
[DO NOT MERGE] feat: LDBC benchmark and tests #547


Draft: wants to merge 5 commits into master

Conversation

SemyonSinchenko (Collaborator)

What changes were proposed in this pull request?

  1. Testing GraphFrames on LDBC graphs and checking the results against references
  2. JMH benchmarks on LDBC graphs

Why are the changes needed?

  1. GraphFrames should, at a minimum, pass the LDBC benchmarks with correct results
  2. Initial development of performance investigations (as described in feat: add small-medium sized benchmarks #532)

@rjurney rjurney self-requested a review March 19, 2025 20:41
@rjurney (Collaborator) left a comment:


lgtm

if (Files.notExists(resourcesDir)) {
  Files.createDirectory(resourcesDir)
}
val connection = LDBCUrl.openConnection()
rjurney (Collaborator):
We’re doing this manually to avoid a dependency, I take it?

SemyonSinchenko (Collaborator, Author):

@rjurney For me, the perfect solution would be to create a repository graphframes/graphframes-testing, place all the LDBC data there, add downloading scripts, and use it as a git submodule in the main graphframes repo. What do you think?
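Whichever way the data ends up hosted, the manual download in the quoted snippet could be made re-run safe without adding a dependency. A minimal sketch, assuming plain JDK I/O; the helper name and parameters are hypothetical, not from the PR:

```scala
import java.net.URL
import java.nio.file.{Files, Path, StandardCopyOption}

// Hypothetical helper: download a file once, creating the target directory
// if needed. Safe to call repeatedly; existing files are not re-fetched.
def downloadIfMissing(url: URL, target: Path): Path = {
  val dir = target.getParent
  if (dir != null && Files.notExists(dir)) {
    Files.createDirectories(dir) // unlike createDirectory, tolerates re-runs
  }
  if (Files.notExists(target)) {
    val in = url.openStream()
    try Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING)
    finally in.close()
  }
  target
}
```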

.text(resourcesDir.resolve(properties.getProperty(s"graph.${caseName}.edge-file")).toString)
.withColumn("split", split(col("value"), " "))
.select(
col("split").getItem(0).cast(LongType).alias(GraphFrame.SRC),
rjurney (Collaborator):

Same thing here, for CSV?

SemyonSinchenko (Collaborator, Author):

I think you are right; I can read it as CSV with an explicit schema and a space separator.
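That suggestion could look roughly like this. A sketch only: the helper name is hypothetical, and the column names "src"/"dst" stand in for GraphFrame.SRC / GraphFrame.DST:

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.types.{LongType, StructField, StructType}

// Sketch: let the CSV reader do the splitting and casting instead of
// reading raw text and calling split()/cast() by hand.
def readEdges(spark: SparkSession, path: String): DataFrame = {
  val schema = StructType(Seq(
    StructField("src", LongType, nullable = false),
    StructField("dst", LongType, nullable = false)))
  spark.read
    .schema(schema)
    .option("delimiter", " ") // space-separated edge list
    .csv(path)
}
```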

val enabled = spark.conf.getOption("spark.sql.adaptive.enabled")
try {
  // disable AQE
  spark.conf.set("spark.sql.adaptive.enabled", value = false)
rjurney (Collaborator):

Why disable AQE? I’ve found some algorithms can be painfully slow without it. Avoid bugs?

SemyonSinchenko (Collaborator, Author):

At the moment this PR is for testing / sharing / discussion. I was trying to disable AQE after a thread on the mailing list.
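The save / disable / restore pattern from the quoted snippet could be wrapped in a small loan-style helper so that AQE is always restored, even if the benchmark body throws. A sketch; the helper name is hypothetical:

```scala
import org.apache.spark.sql.SparkSession

// Sketch: temporarily disable AQE, then restore whatever was set before.
// `body` is any benchmark/test code that should run without AQE.
def withAqeDisabled[T](spark: SparkSession)(body: => T): T = {
  val key = "spark.sql.adaptive.enabled"
  val previous = spark.conf.getOption(key)
  spark.conf.set(key, value = false)
  try body
  finally previous match {
    case Some(v) => spark.conf.set(key, v) // restore the old value
    case None => spark.conf.unset(key)     // or clear it if it was unset
  }
}
```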

@SemyonSinchenko SemyonSinchenko changed the title feat: LDBC benchmark and tests [DO NOT MERGE] feat: LDBC benchmark and tests Mar 20, 2025