forked from Lainports/freebsd-ports
* Catch up to build ID directory changes
* Improve usage()
* Fix a variety of small bugs
* Remove support for -ftp builds: we have not supported direct
uploading for many years, because we prefer to inspect build output
manually for quality first
* All data associated with a build is now localized in its own
directory named according to a build ID:
/var/portbuild/${arch}/${branch}/builds/${buildid}, where ${buildid}
is the creation time. These directories are actually ZFS filesystems.
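As a rough sketch of this naming scheme (the /tmp prefix and the i386/7 arch/branch values are invented stand-ins, not real cluster paths; on the real cluster the directory is a ZFS filesystem rather than a plain mkdir):

```shell
# Hypothetical sketch of the build-ID naming scheme; /tmp prefix and
# i386/7 values are stand-ins, not real cluster paths.
pb=/tmp/portbuild-demo                  # stands in for /var/portbuild
arch=i386
branch=7
buildid=$(date +%Y%m%d%H%M%S)           # build ID is the creation time
builddir=${pb}/${arch}/${branch}/builds/${buildid}
mkdir -p "${builddir}"                  # real builds are ZFS filesystems
echo "${builddir}"
```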
* Tasks such as cloning a new build, updating a ZFS snapshot, and
cleaning up a build are exported to the "build" script, which can be
used independently.
* Creating a new build is done by ZFS cloning and takes a couple of
seconds since it is copy-on-write (i.e. no data needs to be copied).
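What that clone step amounts to at the ZFS level can be sketched as follows; the pool and dataset names here are invented for illustration, and the actual commands are managed by the separate "build" script:

```shell
# Hypothetical dataset names; the real ones are managed by the "build" script.
old=tank/portbuild/i386/7/builds/20080101000000
new=tank/portbuild/i386/7/builds/20080102000000

zfs snapshot ${old}@clone        # freeze the existing build
zfs clone ${old}@clone ${new}    # copy-on-write clone: near-instant, no data copied
```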
* Ports and source trees are also cloned from pre-updated ZFS images
(updated regularly from the "updatesnap" cron job). In most cases
we do not care if the ports tree we build is an hour or so old: it
becomes outdated almost immediately anyway, so no matter what we do
there will be times when a port has already been fixed by the time
a client generates its build error.
* In case an up-to-the-second tree is desired, the -portscvs and
-srccvs switches update the existing ports tree via CVS.
* -noports and -nosrc can be used to prevent any automatic changes to
the ports tree. This is useful for dealing with local
modifications (e.g. for -exp builds), since the default when
creating a new build is to replace the previous trees with fresh,
pristine trees. If you forget to use this then any local changes
that are not also present in other trees will be lost.
* By default we keep two builds for each arch/branch pair. These
build IDs may also be referred to via the "latest" and "previous"
symlinks. When creating a new build, the old "previous" build is
destroyed by default unless it was originally created with the
-keep switch, which protects a build from automatic destruction.
* By default when a build finishes all of the clients are completely
cleaned up (i.e. all build data such as ports trees, tarballs,
client chroots, etc are deleted). This is needed to save space on
the clients. If you expect to *immediately* perform further builds
after this one completes, the -nocleanup switch prevents this step.
Otherwise they will just be set up again if further builds are
scheduled.
* Parallelize build pre-processing as much as possible by running
jobs in the background. Several jobs operate on the same parts of
the filesystem, so they make good use of caching to improve
performance.
* Clients no longer need to be set up explicitly at the start of the
build; they are set up on demand when the first job is dispatched
to them. This lets fast clients, or those that have already been
set up, begin building ports as soon as possible while slow clients
are set up in the background. It also makes client recovery more
robust, e.g. when a client was offline at build startup but is
later brought back online.
* Optimize copying back in the previous set of restricted packages by
hardlinking instead of copying.
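This is the same trick the script's cpio -dumpl invocations use: a hard link gives the new tree a second name for the same data blocks instead of a second copy. A minimal, self-contained illustration (the file names are invented):

```shell
# Hypothetical illustration: hard-linking a "restricted package" back in
# shares the data instead of duplicating it.
tmp=$(mktemp -d)
mkdir ${tmp}/prev ${tmp}/cur
echo "package data" > ${tmp}/prev/foo-1.0.tbz

ln ${tmp}/prev/foo-1.0.tbz ${tmp}/cur/foo-1.0.tbz   # no data copied

# Both directory entries now point at one inode (link count 2).
ls -l ${tmp}/cur/foo-1.0.tbz
```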
TODO: The record of failed ports is still arch/branch-global. This is
the only thing preventing us from running concurrent builds of the
same arch/branch (e.g. while one is stuck building openoffice, the
next build can start to keep the cluster busy). The difficulty is
that a build from a later ports tree may signal that a port built
successfully, and then a phase 2 build from an earlier ports tree may
indicate that it was broken. The solution is probably to migrate this
record to a real database instead of a flat file, and query it for
the set of broken ports as of a given ports tree date.
715 lines · No EOL · 20 KiB · Bash · Executable file
#!/bin/sh

# configurable variables
pb=/var/portbuild

PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:${pb}/scripts

# writable by portmgr
umask 002

usage () {
    echo "usage: arch branch buildid date [-continue] [-incremental] [-restart] [-nofinish] [-finish] [-keep] [-nocleanup] [-cdrom] [-nobuild] [-noindex] [-noduds] [-norestr] [-nosrc] [-srccvs] [-noports] [-portscvs] [-noplistcheck] [-nodistfiles] [-fetch-original] [-trybroken]"
    echo " -incremental : Start a new incremental build"
    echo " -continue : Restart an interrupted build, skipping failed ports"
    echo " -restart : Restart an interrupted build, rebuilding failed ports"
    echo " -nofinish : Do not post-process upon build completion"
    echo " -finish : Post-process a completed build"
    echo " -nocleanup : Do not clean up and deactivate the build once it finishes"
    echo " -nobuild : Only do the build preparation steps, do not build packages"
    echo " -noindex : Do not build the INDEX"
    echo " -noduds : Do not build the duds file"
    echo " -norestr : Do not build the restricted.sh file"
    echo " -nosrc : Do not update the src tree"
    echo " -srccvs : Update the src tree via CVS, don't use a pre-existing snapshot"
    echo " -noports : Do not update the ports tree"
    echo " -portscvs : Update the ports tree via CVS, don't use a pre-existing snapshot"
    echo " -noplistcheck : Don't check the plist during the build"
    echo " -nodistfiles : Don't collect distfiles"
    echo " -fetch-original : Fetch from original MASTER_SITE"
    echo " -trybroken : Try to build BROKEN ports"
    echo " -keep : Do not automatically recycle this build"
    echo " -cdrom : Prepare a build for distribution on CDROM"

    exit 1
}

if [ $# -lt 4 ]; then
    usage
fi

arch=$1
branch=$2
buildid=$3
date=$4
shift 4

. ${pb}/scripts/buildenv
validate_env ${arch} ${branch} || usage

requested_buildid=${buildid}
buildid=$(resolve ${pb} ${arch} ${branch} ${buildid})
if [ -z "${buildid}" ]; then
    # resolve returned nothing, so report the ID the caller asked for
    echo "Invalid build ID ${requested_buildid}"
    exit 1
fi

if [ -f ${pb}/${arch}/portbuild.conf ]; then
    . ${pb}/${arch}/portbuild.conf
else
    usage
fi

pbab=${pb}/${arch}/${branch}

trap "exit 1" 1 2 3 9 10 11 15

mailexit () {
    echo | mail -s "$(basename $0) ended for ${arch}-${branch} ${buildid} at $(date)" ${mailto}
    exit $1
}

srctar() {
    tar cfCj ${builddir}/src-${buildid}.tbz ${builddir} src/
    md5 ${builddir}/src-${buildid}.tbz > ${builddir}/src-${buildid}.tbz.md5
}

portstar() {
    tar cfCj ${builddir}/ports-${buildid}.tbz ${builddir} ports/
    md5 ${builddir}/ports-${buildid}.tbz > ${builddir}/ports-${buildid}.tbz.md5
}

# usage: makeindex pb arch branch buildid builddir
makeindex () {
    pb=$1
    arch=$2
    branch=$3
    buildid=$4
    builddir=$5

    cd ${builddir}/ports
    echo "================================================"
    echo "generating index"
    echo "================================================"
    echo "index generation started at $(date)"
    ${pb}/scripts/makeindex ${arch} ${branch} ${buildid} || mailexit 1
    echo "index generation ended at $(date)"
    echo $(wc -l ${INDEXFILE} | awk '{print $1}') "lines in INDEX"

    # Save a copy of it for the next build since ports directories may
    # not be preserved
    cp ${INDEXFILE} ${builddir}/bak
}

# usage: checkindex builddir
# Perform some sanity checks on the INDEX so we don't blow up later on
checkindex () {
    builddir=$1

    cd ${builddir}/ports
    if grep -q non-existent ${INDEXFILE}; then
        echo "errors in INDEX:"
        grep -n non-existent ${INDEXFILE}
        mailexit 1
    fi
    if ! awk -F '|' '{if (NF != 13) { error=1; printf("line %d: %s\n", NR, $0)}} END {if (error == 1) exit(1)}' ${INDEXFILE}; then
        echo "error in INDEX"
        mailexit 1
    fi
}

# usage: makeduds pb arch branch buildid builddir
makeduds () {
    pb=$1
    arch=$2
    branch=$3
    buildid=$4
    builddir=$5

    cd ${builddir}/ports
    echo "================================================"
    echo "generating duds"
    echo "================================================"
    echo "duds generation started at $(date)"
    cp -p ${builddir}/duds ${builddir}/duds.old
    if ! ${pb}/scripts/makeduds ${arch} ${branch} ${buildid}; then
        echo "error(s) detected, exiting script at $(date). Failed duds list was:"
        cat ${builddir}/duds
        mailexit 1
    fi
    echo "duds generation ended at $(date)"
    echo $(wc -l ${builddir}/duds | awk '{print $1}') "items in duds"
    echo "duds diff:"
    diff ${builddir}/duds.old ${builddir}/duds
    cp -p ${builddir}/duds ${builddir}/duds.orig
}

# usage: restrictedlist pb arch branch buildid builddir
restrictedlist () {
    pb=$1
    arch=$2
    branch=$3
    buildid=$4
    builddir=$5

    cd ${builddir}/ports
    echo "================================================"
    echo "creating restricted list"
    echo "================================================"
    echo "restricted list generation started at $(date)"
    ${pb}/scripts/makerestr ${arch} ${branch} ${buildid} || mailexit 1
    echo "restricted list generation ended at $(date)"
    echo $(grep -c '^#' ${builddir}/restricted.sh) "ports in ${builddir}/restricted.sh"
}

# usage: cdromlist pb arch branch builddir
cdromlist () {
    pb=$1
    arch=$2
    branch=$3
    builddir=$4

    cd ${builddir}/ports
    echo "================================================"
    echo "creating cdrom list"
    echo "================================================"
    echo "cdrom list generation started at $(date)"
    make ECHO_MSG=true clean-for-cdrom-list \
        | sed -e "s./usr/ports/distfiles/./distfiles/.g" \
              -e "s./usr/ports/./${branch}/.g" \
        > ${builddir}/cdrom.sh
    echo "cdrom list generation ended at $(date)"
    echo $(grep -c '^#' ${builddir}/cdrom.sh) "ports in ${builddir}/cdrom.sh"
}

# XXX Should use SHA256 instead, but I'm not sure what consumes this file (if anything)
# XXX Should generate these as the packages are copied in, instead of all at once at the end
# usage: generatemd5 pb arch branch builddir
generatemd5 () {
    pb=$1
    arch=$2
    branch=$3
    builddir=$4

    echo "started generating CHECKSUM.MD5 at $(date)"
    cd ${builddir}/packages/All
    find . -name '*.tbz' | sort | sed -e 's/^..//' | xargs md5 > CHECKSUM.MD5
    echo "ended generating CHECKSUM.MD5 at $(date)"
}


dobuild() {
    pb=$1
    arch=$2
    branch=$3
    builddir=$4
    phase=$5

    count=0
    for i in `cat ${pb}/${arch}/mlist`; do
        . ${pb}/${arch}/portbuild.conf
        test -f ${pb}/${arch}/portbuild.${i} && . ${pb}/${arch}/portbuild.${i}
        count=$((${count}+${maxjobs}))
    done

    echo "================================================"
    echo "building packages (phase ${phase})"
    echo "================================================"
    echo "started at $(date)"
    phasestart=$(date +%s)
    make -k -j${count} quickports all > ${builddir}/make.${phase} 2>&1 </dev/null
    echo "ended at $(date)"
    phaseend=$(date +%s)
    echo "phase ${phase} took $(date -u -j -r $(($phaseend - $phasestart)) | awk '{print $4}')"
    echo $(echo $(ls -1 ${builddir}/packages/All | wc -l) - 2 | bc) "packages built"

    echo $(wc -l ${PORTSDIR}/${INDEXFILE} | awk '{print $1}') "lines in INDEX"

    echo $(echo $(du -sk ${builddir}/packages | awk '{print $1}') / 1024 | bc) "MB of packages"
    echo $(echo $(du -sk ${builddir}/distfiles | awk '{print $1}') / 1024 | bc) "MB of distfiles"

    cd ${builddir}
    if grep -qE '(ptimeout|pnohang): killing' make.${phase}; then
        echo "The following port(s) timed out:"
        grep -E '(ptimeout|pnohang): killing' make.${phase} | sed -e 's/^.*ptimeout:/ptimeout:/' -e 's/^.*pnohang:/pnohang:/'
    fi
}

me=$(hostname)
starttime=$(date +%s)

echo "Subject: $me package building logs"
echo
echo "Called with arguments: $@"
echo "Started at ${starttime}"

nobuild=0
noindex=0
noduds=0
nosrc=0
srccvs=0
noports=0
portscvs=0
norestr=0
noplistcheck=0
cdrom=0
restart=0
cont=0
finish=0
nofinish=0
dodistfiles=1
fetch_orig=0
trybroken=0
incremental=0
keep=0
nocleanup=0

# optional arguments
while [ $# -gt 0 ]; do
    case "x$1" in
        x-nobuild)
            nobuild=1
            ;;
        x-noindex)
            noindex=1
            ;;
        x-noduds)
            noduds=1
            ;;
        x-cdrom)
            cdrom=1
            ;;
        x-nosrc)
            nosrc=1
            ;;
        x-srccvs)
            srccvs=1
            ;;
        x-noports)
            noports=1
            ;;
        x-portscvs)
            portscvs=1
            ;;
        x-norestr)
            norestr=1
            ;;
        x-noplistcheck)
            noplistcheck=1
            ;;
        x-nodistfiles)
            dodistfiles=0
            ;;
        x-fetch-original)
            fetch_orig=1
            ;;
        x-trybroken)
            trybroken=1
            ;;
        x-continue)
            cont=1
            ;;
        x-restart)
            restart=1
            ;;
        x-nofinish)
            nofinish=1
            ;;
        x-finish)
            nobuild=1
            finish=1
            ;;
        x-incremental)
            incremental=1
            ;;
        x-keep)
            keep=1
            ;;
        x-nocleanup)
            nocleanup=1
            ;;
        *)
            usage
            ;;
    esac
    shift
done

if [ "$restart" = 1 -o "$cont" = 1 -o "$finish" = 1 ]; then
    skipstart=1
else
    skipstart=0
fi

# XXX check for conflict between -noports and -portscvs etc

# We have valid options, start the build

echo | mail -s "$(basename $0) started for ${arch}-${branch} ${buildid} at $(date)" ${mailto}

if [ "$dodistfiles" = 1 ]; then
    # XXX flip default to always collect
    export WANT_DISTFILES=1
fi

if [ "$noplistcheck" = 1 ]; then
    export NOPLISTCHECK=1
fi

if [ "$cdrom" = 1 ]; then
    export FOR_CDROM=1
fi

if [ "$fetch_orig" = 1 ]; then
    export FETCH_ORIGINAL=1
fi

if [ "$trybroken" = 1 ]; then
    export TRYBROKEN=1
fi

# Start setting up build environment

if [ "${skipstart}" -eq 0 ]; then
    oldbuildid=${buildid}
    buildid=$(date +%Y%m%d%H%M%S)
    build clone ${arch} ${branch} ${oldbuildid} ${buildid}
fi

builddir=${pbab}/builds/${buildid}

df -k | grep ${buildid}
# Set up our environment variables
buildenv ${pb} ${arch} ${branch} ${builddir}

if [ "${keep}" -eq 1 ]; then
    touch ${builddir}/.keep
fi

# Mark as active so that it is not automatically cleaned up on the
# clients
touch ${builddir}/.active

# Update link to current logfile created by dopackages.wrapper
ln -sf ${pb}/${arch}/archive/buildlogs/log.${branch}.${date} \
    ${builddir}/build.log

if [ "$skipstart" = 0 ]; then

    # Update build

    if [ "$incremental" = 1 ]; then
        # Stash a copy of the index since we may be about to replace
        # it with the ZFS update
        if [ -f ${PORTSDIR}/${INDEXFILE} ]; then
            cp ${PORTSDIR}/${INDEXFILE} ${builddir}/bak/${INDEXFILE}
        fi
    fi

    if [ ${noports} -eq 0 ]; then
        if [ -L ${builddir}/ports -o ${portscvs} -eq 1 ]; then
            echo "================================================"
            echo "running cvs update -PAd on ${PORTSDIR}"
            echo "================================================"
            cd ${PORTSDIR}
            cvsdone=$(date)
            echo ${cvsdone} > ${builddir}/cvsdone
            cvs -Rq update -PdA -D "${cvsdone}"
            # XXX Check for conflicts
        else
            build portsupdate ${arch} ${branch} ${buildid} $@
        fi
    else
        rm -f ${builddir}/cvsdone
    fi

    if [ "$incremental" = 1 ]; then
        if [ -f ${builddir}/bak/${INDEXFILE} ]; then
            cp ${builddir}/bak/${INDEXFILE} ${PORTSDIR}/${INDEXFILE}.old
        fi
    fi
    # Create tarballs for distributing to clients. Should not cause
    # much extra delay because we will do this in conjunction with
    # recursing over the ports tree anyway just below, and might have
    # just finished cvs updating, so it is likely to be in cache.
    portstar &

    if [ ${nosrc} -eq 0 ]; then
        if [ -L ${builddir}/src -o ${srccvs} -eq 1 ]; then
            echo "================================================"
            echo "running cvs update -PAd on ${SRCBASE}"
            echo "================================================"
            cd ${SRCBASE}
            if [ -z "${cvsdone}" ]; then
                # Don't overwrite/create cvsdone if we didn't set it
                # with the ports update
                cvsdone=$(date)
            fi
            cvs -Rq update -PdA -D "${cvsdone}"
            # XXX Check for conflicts
        else
            build srcupdate ${arch} ${branch} ${buildid} $@
        fi
    fi
    srctar &

    # Begin build preprocess

    echo "================================================"
    echo "running make checksubdirs"
    echo "================================================"
    cd ${PORTSDIR}
    make checksubdirs

    # not run in background to check return status
    if [ "$noindex" = 0 ]; then
        makeindex ${pb} ${arch} ${branch} ${buildid} ${builddir} || mailexit 1
    fi
    checkindex ${builddir}
    if [ "$noduds" = 0 ]; then
        makeduds ${pb} ${arch} ${branch} ${buildid} ${builddir} || mailexit 1
    fi

    wait # for tar creation

    if [ "$trybroken" = 1 ]; then
        echo "================================================"
        echo "pruning stale entries from the failed ports list"
        echo "================================================"

        # XXX failure and newfailure are arch/branch-global for now. We
        # will need to work out how to deal with updates from
        # concurrent builds though (one build may fail after a more
        # recent build has fixed the breakage)
        cp ${pbab}/failure ${pbab}/newfailure ${builddir}/bak
        lockf -k ${pbab}/failure.lock ${pb}/scripts/prunefailure ${arch} ${branch} ${builddir}
    fi
fi

if [ "$skipstart" = 0 ]; then
    # XXX These can happen after build start
    if [ "$norestr" = 0 ]; then
        restrictedlist ${pb} ${arch} ${branch} ${buildid} ${builddir} &
    fi

    if [ "$cdrom" = 1 ]; then
        cdromlist ${pb} ${arch} ${branch} ${builddir} &
    fi

    ${pb}/scripts/makeparallel ${arch} ${branch} ${buildid} &

    cd ${builddir}
    mv distfiles/ .distfiles~
    rm -rf .distfiles~ &
    mkdir -p distfiles/

    olderrors=$(readlink ${builddir}/errors)
    oldlogs=$(readlink ${builddir}/logs)

    newerrors=${pb}/${arch}/archive/errorlogs/e.${branch}.${buildid}
    newlogs=${pb}/${arch}/archive/errorlogs/a.${branch}.${buildid}

    # Cycle out the previous symlinks
    rm -f bak/errors
    rm -f bak/logs
    mv errors logs bak

    # Create new log directories for archival
    rm -rf ${newerrors}
    mkdir -p ${newerrors}/old-errors
    ln -sf ${newerrors} ${builddir}/errors
    rm -rf ${newlogs}
    mkdir -p ${newlogs}
    ln -sf ${newlogs} ${builddir}/logs

    echo "error logs in ${newerrors}"
    if [ -f "${builddir}/cvsdone" ]; then
        cp -p ${builddir}/cvsdone ${newerrors}/cvsdone
        cp -p ${builddir}/cvsdone ${newlogs}/cvsdone
    else
        rm -f ${newerrors}/cvsdone ${newlogs}/cvsdone
    fi
    cp -p ${builddir}/duds ${newerrors}/duds
    cp -p ${builddir}/duds ${newlogs}/duds
    cp -p ${builddir}/ports/${INDEXFILE} ${newerrors}/INDEX
    cp -p ${builddir}/ports/${INDEXFILE} ${newlogs}/INDEX

    if [ "$incremental" = 1 ]; then

        # Copy back in the restricted ports that were saved after the
        # previous build
        if [ -d bak/restricted/ ]; then
            cd ${builddir}/bak/restricted
            find . | cpio -dumpl ${builddir}
        fi
        cd ${builddir}

        # Create hardlinks to previous set of logs
        cd ${oldlogs} && find . -name \*.log\* | cpio -dumpl ${newlogs}
        cd ${olderrors} && find . -name \*.log\* | cpio -dumpl ${newerrors}

        # Identify the ports that have changed and need to be removed
        # before rebuilding
        cd ${PORTSDIR}
        cut -f 1,2,3,8,9,11,12,13 -d \| ${INDEXFILE}.old | sort > ${INDEXFILE}.old1
        cut -f 1,2,3,8,9,11,12,13 -d \| ${INDEXFILE} | sort > ${INDEXFILE}.1
        comm -2 -3 ${INDEXFILE}.old1 ${INDEXFILE}.1 | cut -f 1 -d \| > ${builddir}/.oldports

        echo "Removing $(wc -l ${builddir}/.oldports | awk '{print $1}') packages in preparation for incremental build"
        rm ${INDEXFILE}.old1 ${INDEXFILE}.1

        cd ${PACKAGES}/All
        sed "s,$,${PKGSUFFIX}," ${builddir}/.oldports | xargs rm -f
        ${pb}/scripts/prunepkgs ${PORTSDIR}/${INDEXFILE} ${PACKAGES}

        cd ${builddir}/errors/
        sed "s,\$,.log," ${builddir}/.oldports | xargs rm -f
        sed "s,\$,.log.bz2," ${builddir}/.oldports | xargs rm -f

        cd ${builddir}/logs/
        sed 's,$,.log,' ${builddir}/.oldports | xargs rm -f
        sed 's,$,.log.bz2,' ${builddir}/.oldports | xargs rm -f
    else
        cd ${builddir}

        mv packages .packages~
        rm -rf .packages~ &
        mkdir -p packages/All
    fi
fi

# XXX only need to wait for some tasks
wait

if [ "$nobuild" = 0 ]; then
    cd ${builddir}

    if [ "$cont" = 1 ]; then
        find errors/ -name \*.log | sed -e 's,\.log$,,' -e 's,^errors/,,' > duds.errors
        cat duds duds.errors | sort -u > duds.new
        mv duds.new duds
    else
        cp duds.orig duds
    fi

    cd ${builddir}/packages/All
    ln -sf ../../Makefile .

    dobuild ${pb} ${arch} ${branch} ${builddir} 1

    ls -asFlrt ${builddir}/packages/All > ${builddir}/logs/ls-lrt

    cd ${builddir}/errors/
    find . -name '*.log' -depth 1 | cpio -dumpl ${builddir}/errors/old-errors

    # Clean up the clients
    ${pb}/scripts/build cleanup ${arch} ${branch} ${buildid}

    wait
    echo "setting up of nodes ended at $(date)"

    cd ${builddir}/packages/All
    dobuild ${pb} ${arch} ${branch} ${builddir} 2

fi

# Clean up temporary duds file
if [ "$cont" = 1 ]; then
    cp duds.orig duds
fi

cd ${builddir}/packages/All
if [ "$nofinish" = 0 ]; then
    rm -f Makefile

    if [ "$norestr" = 0 ]; then
        # Before deleting restricted packages, save a copy so we don't
        # have to rebuild them next time
        ${pb}/scripts/keeprestr ${arch} ${branch} ${buildid}
    else
        rm -rf ${builddir}/bak/restricted/
    fi

    # Always delete restricted packages/distfiles since they're
    # published on the website
    echo "deleting restricted ports"
    sh ${builddir}/restricted.sh

    if [ "$cdrom" = 1 ]; then
        echo "deleting cdrom restricted ports"
        sh ${builddir}/cdrom.sh
    fi

    # Remove packages not listed in INDEX
    ${pb}/scripts/prunepkgs ${builddir}/ports/${INDEXFILE} ${builddir}/packages
fi

# XXX Checking for bad packages should be done after the package is uploaded
#rm -rf ${builddir}/bad
#mkdir -p ${builddir}/bad
#echo "checking packages"
#for i in *${PKGSUFFIX}; do
#    if ! ${PKGZIPCMD} -t $i; then
#        echo "Warning: package $i is bad, moving to ${builddir}/bad"
#        # the latest link will be left behind...
#        mv $i ${builddir}/bad
#        rm ../*/$i
#    fi
#done

if [ "$nofinish" = 0 ]; then
    generatemd5 ${pb} ${arch} ${branch} ${builddir} &

    # Remove INDEX entries for packages that do not exist
    ${pb}/scripts/chopindex ${builddir}/ports/${INDEXFILE} ${builddir}/packages > ${builddir}/packages/INDEX

    ls -asFlrt ${builddir}/packages/All > ${builddir}/logs/ls-lrt
    cp -p ${builddir}/make.[12] ${builddir}/logs

    echo "================================================"
    echo "copying distfiles"
    echo "================================================"
    echo "started at $(date)"
    cd ${builddir}
    ${pb}/scripts/dodistfiles ${arch} ${branch} ${buildid}

    # Always delete restricted distfiles
    echo "deleting restricted distfiles"
    sh ${builddir}/restricted.sh

    if [ "$cdrom" = 1 ]; then
        echo "deleting cdrom restricted distfiles"
        sh ${builddir}/cdrom.sh
    fi

    wait

    if [ "$branch" != "4-exp" ]; then
        # Currently broken - kk
        #su ${user} -c "${pb}/scripts/cpdistfiles ${branch} > ${builddir}/cpdistfiles.log 2>&1 </dev/null" &
        if [ "$ftp" = 1 ]; then
            echo "ended at $(date)"
            echo "================================================"
            echo "copying packages"
            echo "================================================"
            ${pb}/scripts/docppackages ${arch} ${branch} ${builddir}
        fi
    fi
fi

if [ "${nocleanup}" -eq 1 ]; then
    echo "Not cleaning up build, when you are finished be sure to run:"
    echo " ${pb}/scripts/build cleanup ${arch} ${branch} ${buildid} -full"
else
    ${pb}/scripts/build cleanup ${arch} ${branch} ${buildid} -full
fi

endtime=$(date +%s)
echo "================================================"
echo "all done at $(date)"
echo "entire process took $(date -u -j -r $(($endtime - $starttime)) | awk '{print $4}')"
echo "================================================"

mailexit 0