Sync Dev #378

Merged · 82 commits · May 8, 2025

Commits
31e44eb
Cleanups for openai-call
arjunsuresh Apr 21, 2025
a6c5374
[Automated Commit] Format Codebase [skip ci]
github-actions[bot] Apr 21, 2025
3c5e7e2
Merge pull request #368 from GATEOverflow/dev
arjunsuresh Apr 21, 2025
f92df25
Added run files for openai call
arjunsuresh Apr 21, 2025
57ad7f2
Merge pull request #369 from GATEOverflow/dev
arjunsuresh Apr 21, 2025
33840ec
Merge pull request #370 from mlcommons/main
anandhu-eng Apr 22, 2025
55173e6
skip authentication when service account credentials are provided
anandhu-eng Apr 22, 2025
7ba537b
change the remote name
anandhu-eng Apr 22, 2025
8c59b5d
Merge pull request #371 from anandhu-eng/bypass-auth
anandhu-eng Apr 22, 2025
c82bca4
[Automated Commit] Format Codebase [skip ci]
github-actions[bot] Apr 22, 2025
01d67d8
fix import
anandhu-eng Apr 22, 2025
e852abf
Merge pull request #375 from anandhu-eng/fiximports
arjunsuresh Apr 22, 2025
8ea9e55
path string fix
anandhu-eng Apr 22, 2025
c2cdade
Fix dgl version for mlperf inference rgat
arjunsuresh Apr 22, 2025
82a0ae1
Merge branch 'dev' into dev
arjunsuresh Apr 22, 2025
4976801
Merge pull request #377 from GATEOverflow/dev
arjunsuresh Apr 22, 2025
5e53ee4
fix quotes
anandhu-eng Apr 22, 2025
c8c969b
add space in MLC repo folder
anandhu-eng Apr 22, 2025
432a9cd
[Automated Commit] Format Codebase [skip ci]
github-actions[bot] Apr 22, 2025
3359e6a
test commit
anandhu-eng Apr 22, 2025
713ce33
better handling of fstring
anandhu-eng Apr 22, 2025
1595a01
test commit - closed division run on pull request target
anandhu-eng Apr 22, 2025
128d798
revert test commit
anandhu-eng Apr 22, 2025
e1a9051
Merge pull request #376 from anandhu-eng/path-str-fix
arjunsuresh Apr 22, 2025
bdb6cff
fix command generation
anandhu-eng Apr 23, 2025
1b1206c
Merge pull request #379 from anandhu-eng/path-str-fix
anandhu-eng Apr 23, 2025
da86f8d
test commit - fix command generation
anandhu-eng Apr 23, 2025
e0eab41
revert the workflow change
anandhu-eng Apr 23, 2025
375f22c
Merge pull request #380 from anandhu-eng/path-str-fix
anandhu-eng Apr 23, 2025
e67da9f
fix command generation - paths with space
anandhu-eng Apr 23, 2025
7a4b237
Merge pull request #381 from anandhu-eng/path-str-fix
anandhu-eng Apr 23, 2025
c881e13
fix for handling space
anandhu-eng Apr 23, 2025
e26b126
Merge pull request #383 from anandhu-eng/path-str-fix
anandhu-eng Apr 23, 2025
9ceafe4
fixes for path issues
anandhu-eng Apr 23, 2025
a6316bc
test commit
anandhu-eng Apr 23, 2025
ea16995
Merge pull request #384 from anandhu-eng/path-str-fix
anandhu-eng Apr 23, 2025
dcd43fb
commit for command formation
anandhu-eng Apr 23, 2025
5825ecd
test commit
anandhu-eng Apr 23, 2025
a54971b
[Automated Commit] Format Codebase [skip ci]
github-actions[bot] Apr 23, 2025
e42f643
fix link issue
anandhu-eng Apr 23, 2025
ca8e140
Merge branch 'path-str-fix' of https://github.com/anandhu-eng/mlperf-…
anandhu-eng Apr 23, 2025
9fdc701
[Automated Commit] Format Codebase [skip ci]
github-actions[bot] Apr 23, 2025
daa142b
update customize.py
anandhu-eng Apr 23, 2025
b31b521
Merge branch 'path-str-fix' of https://github.com/anandhu-eng/mlperf-…
anandhu-eng Apr 23, 2025
f5ac959
[Automated Commit] Format Codebase [skip ci]
github-actions[bot] Apr 23, 2025
5720689
update customize.py
anandhu-eng Apr 23, 2025
1bc4b05
[Automated Commit] Format Codebase [skip ci]
github-actions[bot] Apr 23, 2025
3d95aa4
update customize.py
anandhu-eng Apr 23, 2025
f5928e0
test commit - handle expansion at runtime
anandhu-eng Apr 23, 2025
6861812
Merge pull request #385 from anandhu-eng/path-str-fix
anandhu-eng Apr 23, 2025
3e460fc
fix for path issue
anandhu-eng Apr 23, 2025
fbd5fc6
Merge pull request #386 from anandhu-eng/path-str-fix
anandhu-eng Apr 23, 2025
0cf6e90
fix for space in path
anandhu-eng Apr 23, 2025
c90c380
Merge pull request #387 from anandhu-eng/path-str-fix
anandhu-eng Apr 23, 2025
00c491c
fixes the output path when there is space - compiler linkage
anandhu-eng Apr 23, 2025
b31dd66
Merge pull request #388 from anandhu-eng/path-str-fix
anandhu-eng Apr 23, 2025
2b7c8fc
fix space in path issue for dump freeze
anandhu-eng Apr 23, 2025
6248118
Merge pull request #389 from anandhu-eng/path-str-fix
anandhu-eng Apr 23, 2025
b063ffa
run benchmark with forked inference repo
anandhu-eng Apr 23, 2025
0491f9d
Merge pull request #390 from anandhu-eng/path-str-fix
anandhu-eng Apr 23, 2025
f4bda4f
corrected git repo link
anandhu-eng Apr 23, 2025
d1bc68d
Merge pull request #391 from anandhu-eng/path-str-fix
anandhu-eng Apr 23, 2025
c5ec48a
fix for space in path
anandhu-eng Apr 23, 2025
8deaeb0
fix for paths
anandhu-eng Apr 23, 2025
6128212
Merge pull request #392 from anandhu-eng/path-str-fix
anandhu-eng Apr 24, 2025
61ebea3
Update test-mlperf-inference-rgat.yml
arjunsuresh Apr 24, 2025
0279a9c
Update test-mlperf-inference-mlcommons-cpp-resnet50.yml
arjunsuresh Apr 24, 2025
5b72b19
Merge pull request #394 from GATEOverflow/dev
arjunsuresh Apr 24, 2025
0fa0e85
Update test-amd-mlperf-inference-implementations.yml
arjunsuresh Apr 24, 2025
ec70c45
Update test-mlperf-inference-mlcommons-cpp-resnet50.yml
arjunsuresh Apr 24, 2025
53162a1
Update run-tests-on-modified-meta.yml
arjunsuresh Apr 24, 2025
20b968f
Merge branch 'dev' into dev
arjunsuresh Apr 24, 2025
93a986c
Update test-mlperf-inference-rgat.yml
arjunsuresh Apr 24, 2025
3771214
Merge branch 'dev' into dev
arjunsuresh Apr 24, 2025
4e3caad
Update test-mlperf-inference-retinanet.yml
arjunsuresh Apr 24, 2025
73e2e57
Merge pull request #395 from GATEOverflow/dev
arjunsuresh Apr 24, 2025
3c7829b
Replace print with MLC Logger (#396)
anandhu-eng Apr 24, 2025
c5bfa55
Use num_threads=1 for retinanet (#397)
arjunsuresh Apr 24, 2025
7e3d65e
added experiment to script automation (#398)
anandhu-eng Apr 25, 2025
06b95fa
[Automated Commit] Format Codebase [skip ci]
github-actions[bot] Apr 25, 2025
9fba422
add --multi-thread-streams=0 for rclone version >= 1.60.0 (#402)
anandhu-eng May 6, 2025
086a7a5
Fixes for llvm-install-src (#404)
arjunsuresh May 7, 2025
1 change: 1 addition & 0 deletions .github/workflows/run-tests-on-modified-meta.yml
@@ -39,6 +39,7 @@ jobs:
process_modified_files:
runs-on: ubuntu-latest
needs: get_modified_files
if: needs.determine_modified_files.outputs.processed_files != '[]' && needs.determine_modified_files.outputs.processed_files != ''
strategy:
fail-fast: false
matrix:
@@ -13,6 +13,7 @@ jobs:
matrix:
python-version: [ "3.12" ]
model: [ "llama2-70b-99.9" ]

steps:
- name: Test MLPerf Inference AMD (build only) ${{ matrix.model }}
run: |
@@ -43,11 +43,9 @@ jobs:
fail-fast: false
matrix:
python-version: [ "3.12" ]
llvm-version: [ "15.0.6", "16.0.4", "17.0.6" ]
compiler-string: [ "--adr.compiler.tags=gcc", "--adr.compiler.tags=aocc --env.MLC_AOCC_ACCEPT_EULA=yes", "--adr.compiler.tags=llvm --adr.compiler.version=17.0.6" ]
os: [ubuntu-latest, windows-latest, macos-latest]
exclude:
- llvm-version: "15.0.6"
- llvm-version: "16.0.4"
- os: windows-latest
- os: macos-latest

@@ -63,16 +61,15 @@
- name: Pull MLOps repository
run: |
mlc pull repo ${{ github.event.pull_request.head.repo.html_url }} --branch=${{ github.event.pull_request.head.ref }}
mlcr --quiet --tags=get,sys-utils-cm
mlcr --quiet --tags=install,prebuilt,llvm --version=${{ matrix.llvm-version }}
mlcr --quiet --tags=get,sys-utils-mlc
- name: Test MLPerf Inference MLCommons C++ ResNet50 on ${{ matrix.os }}
if: matrix.os == 'windows-latest'
run: |
mlcr app,mlperf,inference,mlcommons,cpp --submitter="MLCommons" --hw_name=gh_${{ matrix.os }} --adr.loadgen.tags=_from-pip --pip_loadgen=yes -v --quiet
- name: Test MLPerf Inference MLCommons C++ ResNet50 on ${{ matrix.os }}
if: matrix.os != 'windows-latest'
run: |
mlcr app,mlperf,inference,mlcommons,cpp --submitter="MLCommons" --hw_name=gh_${{ matrix.os }} -v --quiet
mlcr app,mlperf,inference,mlcommons,cpp --submitter="MLCommons" --hw_name=gh_${{ matrix.os }} -v --quiet ${{ matrix.compiler-string }}
- name: Randomly Execute Step
id: random-check
run: |
@@ -55,7 +55,6 @@ jobs:
- uses: actions/checkout@v4
with:
fetch-depth: 0

- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v3
with:
@@ -64,7 +63,15 @@
if: matrix.os == 'windows-latest'
run: |
git config --system core.longpaths true

- name: Export MLC_REPOS in Linux or Mac
if: matrix.os == 'ubuntu-latest' || matrix.os == 'macos-latest'
run: |
echo "MLC_REPOS=$HOME/gh action/mlc" >> $GITHUB_ENV
- name: Export MLC_REPOS in Windows
if: matrix.os == 'windows-latest'
run: |
$mlcrepos = "${env:USERPROFILE}\gh action\mlc"
"MLC_REPOS=$mlcRepos" | Out-File -FilePath $env:GITHUB_ENV -Append
- name: Install mlcflow
run: |
pip install mlcflow
@@ -77,12 +84,12 @@
- name: Test MLPerf Inference ResNet50 (Windows)
if: matrix.os == 'windows-latest'
run: |
mlcr run-mlperf,inference,_submission,_short,_all-scenarios --division=closed --submitter="MLCommons" --pull_changes=yes --pull_inference_changes=yes --hw_name="gh_${{ matrix.os }} x86" --model=resnet50 --adr.loadgen.tags=_from-pip --pip_loadgen=yes --implementation=${{ matrix.implementation }} --backend=${{ matrix.backend }} --device=cpu --test_query_count=1000 --quiet --execution_mode=valid
mlcr run-mlperf,inference,_submission,_short,_all-scenarios --division=closed --submitter="MLCommons" --pull_changes=yes --pull_inference_changes=yes --hw_name="gh_${{ matrix.os }} x86" --model=resnet50 --adr.loadgen.tags=_from-pip --pip_loadgen=yes --implementation=${{ matrix.implementation }} --backend=${{ matrix.backend }} --device=cpu --test_query_count=1000 --quiet --execution_mode=valid --adr.inference-src.tags=_repo.https://github.com/anandhu-eng/inference,_branch.patch-34

- name: Test MLPerf Inference ResNet50 Offline(Linux/macOS)
if: matrix.os != 'windows-latest'
run: |
mlcr run-mlperf,inference,_submission,_short,_all-scenarios --division=closed --submitter="MLCommons" --pull_changes=yes --pull_inference_changes=yes --hw_name="gh_${{ matrix.os }} x86" --model=resnet50 --implementation=${{ matrix.implementation }} --backend=${{ matrix.backend }} --device=cpu --test_query_count=1000 --quiet --execution_mode=valid
mlcr run-mlperf,inference,_submission,_short,_all-scenarios --division=closed --submitter="MLCommons" --pull_changes=yes --pull_inference_changes=yes --hw_name="gh_${{ matrix.os }} x86" --model=resnet50 --implementation=${{ matrix.implementation }} --backend=${{ matrix.backend }} --device=cpu --test_query_count=1000 --quiet --execution_mode=valid --adr.inference-src.tags=_repo.https://github.com/anandhu-eng/inference,_branch.patch-34

# Step for Linux/MacOS
- name: Randomly Execute Step (Linux/MacOS)
84 changes: 4 additions & 80 deletions .github/workflows/test-mlperf-inference-retinanet.yml
@@ -9,34 +9,8 @@ on:
- '!**.md'

jobs:
fetch-secret:
runs-on: ubuntu-latest
outputs:
encrypted_secret: ${{ steps.encrypt-secret.outputs.encrypted_secret }}
steps:
- name: Load secret
id: op-load-secret
uses: 1password/load-secrets-action@v2
with:
export-env: false
env:
OP_SERVICE_ACCOUNT_TOKEN: ${{ secrets.OP_SERVICE_ACCOUNT_TOKEN }}
PAT: op://7basd2jirojjckncf6qnq3azai/bzbaco3uxoqs2rcyu42rvuccga/credential

- name: Encrypt secret
id: encrypt-secret
env:
ENCRYPTION_KEY: ${{ secrets.ENCRYPTION_KEY }}
run: |
# AES-256 encrypt
encrypted=$(echo "${{ steps.op-load-secret.outputs.pat }}" | \
openssl enc -e -aes-256-cbc -md sha512 -pbkdf2 -iter 100000 \
-pass pass:"$ENCRYPTION_KEY" -base64 -A)

echo "encrypted_secret=$encrypted" >> $GITHUB_OUTPUT

mlc-run:
needs: [fetch-secret]
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
@@ -45,13 +19,16 @@
python-version: [ "3.12" ]
backend: [ "onnxruntime", "pytorch" ]
implementation: [ "python", "cpp" ]
compiler-string: [ "", "--adr.compiler.tags=aocc --env.MLC_AOCC_ACCEPT_EULA=yes" ]
exclude:
- backend: pytorch
implementation: cpp
- os: windows-latest
implementation: cpp
- os: macos-latest
implementation: cpp
- implementation: python
compiler-string: "--adr.compiler.tags=aocc --env.MLC_AOCC_ACCEPT_EULA=yes"

steps:
- uses: actions/checkout@v3
@@ -78,57 +55,4 @@
- name: Test MLPerf Inference Retinanet using ${{ matrix.backend }} on ${{ matrix.os }}
if: matrix.os != 'windows-latest'
run: |
mlcr run,mlperf,inference,generate-run-cmds,_submission,_short --submitter="MLCommons" --pull_changes=yes --pull_inference_changes=yes --hw_name=gh_${{ matrix.os }}_x86 --model=retinanet --implementation=${{ matrix.implementation }} --backend=${{ matrix.backend }} --device=cpu --scenario=Offline --test_query_count=5 --quiet -v --target_qps=1

# Step for Linux/MacOS
- name: Randomly Execute Step (Linux/MacOS)
if: runner.os != 'Windows'
run: |
RANDOM_NUMBER=$((RANDOM % 10))
echo "Random number is $RANDOM_NUMBER"
if [ "$RANDOM_NUMBER" -eq 0 ]; then
echo "run_step=true" >> $GITHUB_ENV
else
echo "run_step=false" >> $GITHUB_ENV
fi

# Step for Windows
- name: Randomly Execute Step (Windows)
if: runner.os == 'Windows'
run: |
$RANDOM_NUMBER = Get-Random -Maximum 10
Write-Host "Random number is $RANDOM_NUMBER"
if ($RANDOM_NUMBER -eq 0) {
Write-Host "run_step=true" | Out-File -FilePath $Env:GITHUB_ENV -Append
} else {
Write-Host "run_step=false" | Out-File -FilePath $Env:GITHUB_ENV -Append
}

- name: Decrypt secret
id: decrypt-secret
shell: bash
env:
ENCRYPTION_KEY: ${{ secrets.ENCRYPTION_KEY }}
encrypted_secret: ${{ needs.fetch-secret.outputs.encrypted_secret }}
run: |
echo "Running on OS: ${{ matrix.os }}"

# Decrypt
decrypted=$(echo "$encrypted_secret" | \
openssl enc -d -aes-256-cbc -md sha512 -pbkdf2 -iter 100000 \
-pass pass:"$ENCRYPTION_KEY" -base64 -A)

echo "::add-mask::$decrypted"
echo "DECRYPTED_SECRET=$decrypted" >> $GITHUB_OUTPUT
- name: Push Results
env:
GITHUB_TOKEN: ${{ steps.decrypt-secret.outputs.decrypted_secret }}
if: github.repository_owner == 'mlcommons' && env.run_step == 'true'
run: |
git config --global user.name "mlcommons-bot"
git config --global user.email "mlcommons-bot@users.noreply.github.com"
git config --global credential.https://github.com.helper ""
git config --global credential.https://github.com.helper "!gh auth git-credential"
git config --global credential.https://gist.github.com.helper ""
git config --global credential.https://gist.github.com.helper "!gh auth git-credential"
mlcr push,github,mlperf,inference,submission --repo_url=https://github.com/mlcommons/mlperf_inference_test_submissions_v5.0 --repo_branch=auto-update --commit_message="Results from R50 GH action on ${{ matrix.os }}" --quiet
mlcr run,mlperf,inference,generate-run-cmds,_submission,_short --submitter="MLCommons" --pull_changes=yes --pull_inference_changes=yes --hw_name=gh_${{ matrix.os }}_x86 --model=retinanet --implementation=${{ matrix.implementation }} --backend=${{ matrix.backend }} --device=cpu --scenario=Offline --test_query_count=5 --quiet -v --target_qps=1 ${{ matrix.compiler-string }}
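The `fetch-secret` job removed above encrypted a PAT with openssl so it could be passed between jobs, and the later decrypt step reversed it. A standalone sketch of that symmetric round-trip, with a hypothetical key and token in place of the real secrets:

```shell
# Round-trip of the AES-256-CBC scheme used by the removed workflow steps.
# ENCRYPTION_KEY and the token are placeholders, not real secrets.
ENCRYPTION_KEY="example-key"
token="hypothetical-pat"

encrypted=$(echo "$token" | \
  openssl enc -e -aes-256-cbc -md sha512 -pbkdf2 -iter 100000 \
  -pass pass:"$ENCRYPTION_KEY" -base64 -A)

decrypted=$(echo "$encrypted" | \
  openssl enc -d -aes-256-cbc -md sha512 -pbkdf2 -iter 100000 \
  -pass pass:"$ENCRYPTION_KEY" -base64 -A)

echo "$decrypted"
```

The `-A` flag keeps the base64 output on a single line, which is what lets the ciphertext survive being written into `$GITHUB_OUTPUT`.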
23 changes: 1 addition & 22 deletions .github/workflows/test-mlperf-inference-rgat.yml
@@ -9,9 +9,9 @@ on:
- '!**.md'

jobs:

rgat-inference-run:
name: ${{ matrix.os }} - ${{ matrix.backend }} - ${{ matrix.implementation }}
needs: [fetch-secret]
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
@@ -38,24 +38,3 @@
run: |
mlcr run,mlperf,inference,generate-run-cmds,_submission,_short --adr.inference-src.tags=_branch.dev --pull_changes=yes --pull_inference_changes=yes --submitter="MLCommons" --hw_name=gh_${{ matrix.os }}_x86 --model=rgat --implementation=${{ matrix.implementation }} --backend=${{ matrix.backend }} --device=cpu --scenario=Offline --test_query_count=500 --adr.compiler.tags=gcc --category=datacenter --quiet -v --target_qps=1

- name: Load secret
if: github.repository_owner == 'mlcommons' && env.run_step == 'true'
id: op-load-secret
uses: 1password/load-secrets-action@v2
with:
export-env: false
env:
OP_SERVICE_ACCOUNT_TOKEN: ${{ secrets.OP_SERVICE_ACCOUNT_TOKEN }}
PAT: op://7basd2jirojjckncf6qnq3azai/bzbaco3uxoqs2rcyu42rvuccga/credential

- name: Push Results
env:
GITHUB_TOKEN: ${{ steps.op-load-secret.outputs.PAT }}
run: |
git config --global user.name "mlcommons-bot"
git config --global user.email "mlcommons-bot@users.noreply.github.com"
git config --global credential.https://github.com.helper ""
git config --global credential.https://github.com.helper "!gh auth git-credential"
git config --global credential.https://gist.github.com.helper ""
git config --global credential.https://gist.github.com.helper "!gh auth git-credential"
mlcr push,github,mlperf,inference,submission --repo_url=https://github.com/mlcommons/mlperf_inference_test_submissions_v5.0 --repo_branch=auto-update --commit_message="Results from R50 GH action on ${{ matrix.os }}" --quiet
9 changes: 8 additions & 1 deletion automation/script/module.py
@@ -3188,7 +3188,7 @@ def _update_variation_meta_with_dynamic_suffix(
item_value[i] = l_item.replace(
"#", variation_tag_dynamic_suffix)
else:
value[item] = value[item].replace(
value[item] = str(value[item]).replace(
"#", variation_tag_dynamic_suffix)

else: # scalar value, never used?
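The one-line `str()` cast above matters because variation meta values parsed from YAML can be non-string scalars (ints, floats), which have no `.replace()` method. A minimal reproduction with hypothetical meta values:

```python
# Hypothetical variation meta: one string value with a '#' placeholder
# and one int value. Without the str() cast, the int raised
# AttributeError: 'int' object has no attribute 'replace'.
value = {"version": "llvm-#", "threads": 4}
variation_tag_dynamic_suffix = "17.0.6"

for item in value:
    # str() makes the replace safe for non-string scalars like 4
    value[item] = str(value[item]).replace("#", variation_tag_dynamic_suffix)

print(value)  # {'version': 'llvm-17.0.6', 'threads': '4'}
```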
@@ -4469,6 +4469,13 @@ def docker(self, i):
from script.docker import docker_run
return docker_run(self, i)

############################################################
# portion for experiment action.
# as of now, the experiment action directly calls the run action.
# in the future, we will add more functionality to the experiment action.
def experiment(self, i):
return self.run(i)
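The new `experiment` action above is a thin alias for `run`. A self-contained sketch of the delegation pattern (the class body and payload here are illustrative, not the real `module.py`):

```python
# Minimal model of the delegation: experiment() forwards its input to
# run() unchanged, leaving room to add experiment-specific logic later.
class ScriptAutomation:
    def run(self, i):
        return {"return": 0, "handled_by": "run", "input": i}

    def experiment(self, i):
        # as of now, the experiment action directly calls the run action
        return self.run(i)

result = ScriptAutomation().experiment({"tags": "demo"})
print(result["handled_by"])  # run
```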

##########################################################################

def _available_variations(self, i):
2 changes: 1 addition & 1 deletion script/app-image-classification-onnx-py/customize.py
@@ -19,7 +19,6 @@ def postprocess(i):
os_info = i['os_info']
env = i['env']
state = i['state']

automation = i['automation']
logger = automation.action_object.logger

@@ -48,6 +47,7 @@
json.dump(data, f, ensure_ascii=False, indent=4)
except Exception as e:
logger.warning('CM warning: {}'.format(e))
logger.warning('CM warning: {}'.format(e))

try:
import yaml
3 changes: 2 additions & 1 deletion script/app-image-corner-detection/customize.py
@@ -38,6 +38,7 @@ def preprocess(i):
def postprocess(i):

env = i['env']
print(env['MLC_OUTPUT'] + " generated in " + env['MLC_RUN_DIR'])
logger = i['automation'].logger
logger.info(env['MLC_OUTPUT'] + " generated in " + env['MLC_RUN_DIR'])

return {'return': 0}
8 changes: 5 additions & 3 deletions script/app-loadgen-generic-python/customize.py
@@ -11,6 +11,8 @@ def preprocess(i):

env = i['env']

logger = i['automation'].logger

if 'MLC_ML_MODEL_FILE_WITH_PATH' not in env:
return {
'return': 1, 'error': 'Please select a variation specifying the model to run'}
@@ -87,9 +89,9 @@

env['MLC_RUN_OPTS'] = run_opts

print('')
print('Assembled flags: {}'.format(run_opts))
print('')
logger.info('')
logger.info('Assembled flags: {}'.format(run_opts))
logger.info('')

return {'return': 0}

6 changes: 4 additions & 2 deletions script/app-mlperf-inference-dummy/customize.py
@@ -11,6 +11,8 @@ def preprocess(i):
return {'return': 1, 'error': 'Windows is not supported in this script yet'}
env = i['env']

logger = i['automation'].logger

if env.get('MLC_MLPERF_SKIP_RUN', '') == "yes":
return {'return': 0}

@@ -29,8 +31,8 @@
return r
run_cmd = r['run_cmd']
run_dir = r['run_dir']
print(run_cmd)
print(run_dir)
logger.info(run_cmd)
logger.info(run_dir)
return {'return': 1, 'error': 'Run command needs to be tested!'}


6 changes: 4 additions & 2 deletions script/app-mlperf-inference-intel/customize.py
@@ -11,6 +11,8 @@ def preprocess(i):
return {'return': 1, 'error': 'Windows is not supported in this script yet'}
env = i['env']

logger = i['automation'].logger

if env.get('MLC_MLPERF_SKIP_RUN', '') == "yes":
return {'return': 0}

@@ -104,7 +106,7 @@ def preprocess(i):
os.path.dirname(env['MLC_ML_MODEL_FILE_WITH_PATH']), 'retinanet-int8-model.pth')

elif env['MLC_LOCAL_MLPERF_INFERENCE_INTEL_RUN_MODE'] == "build_harness":
print(f"Harness Root: {harness_root}")
logger.info(f"Harness Root: {harness_root}")
if "bert" in env['MLC_MODEL']:
i['run_script_input']['script_name'] = "build_bert_harness"
env['MLC_MLPERF_INFERENCE_INTEL_HARNESS_PATH'] = os.path.join(
@@ -162,7 +164,7 @@
env[model_dir_name])

elif env['MLC_LOCAL_MLPERF_INFERENCE_INTEL_RUN_MODE'] == "run_harness":
print(f"Harness Root: {harness_root}")
logger.info(f"Harness Root: {harness_root}")
if env.get('MLC_MLPERF_LOADGEN_MODE', '') == "compliance":
audit_path = env['MLC_MLPERF_INFERENCE_AUDIT_PATH']
shutil.copy(audit_path, env['MLC_RUN_DIR'])
11 changes: 8 additions & 3 deletions script/app-mlperf-inference-mlcommons-cpp/customize.py
@@ -11,10 +11,15 @@

meta = i['meta']

logger = i['automation'].logger

if os_info['platform'] == 'windows':
print('~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~')
print('WARNING: this script was not thoroughly tested on Windows and compilation may fail - please help us test and improve it!')
print('~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~')
logger.info(
'~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~')
logger.warning(
'This script was not thoroughly tested on Windows and compilation may fail - please help us test and improve it!')
logger.info(
'~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~')
# # Currently support only LLVM on Windows
# print ('# Forcing LLVM on Windows')
# r = automation.update_deps({'deps':meta['post_deps'], 'update_deps':{'compile-program': {'adr':{'compiler':{'tags':'llvm'}}}}})